(d) As in the proof of (b), $A$ is unitarily similar to $A'=\left[\begin{array}{cc} J_k & B\\ 0 & C\end{array}\right]$, where $B={\scriptsize\left[\begin{array}{c} 0\\ b\end{array}\right]}$ and $C$ is invertible. Then $A^j$ is unitarily similar to
$$A'^j=\begin{array}{ll} \ \ \ \overbrace{\ \hspace{15mm} \ }^{\displaystyle j} \ \ \overbrace{\ \hspace{23mm} \ }^{\displaystyle k-j} & \\ \left[\begin{array}{c|c}
\begin{array}{ccccccc}
0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
& \cdot & & 0 & \ddots & \ddots & \vdots \\
& & \cdot & & \ddots & \ddots & 0 \\
& & & \cdot & & \ddots & 1 \\
& & & & \cdot & & 0 \\
& & & & & \cdot & \vdots \\
& & & & & & 0
\end{array} & \begin{array}{c} \\ 0
\\ \\ \\ B_j\end{array}\\ \hline 0 & C^j\end{array}
\right] & \hspace{-11mm}\begin{array}{l}
\left.\begin{array}{l} {\ } \\ {\ } \\ {\ } \\ {\ }\end{array}\right\}k-j\\
\left.\begin{array}{l}{\ } \\ {\ } \\ \vspace*{-2mm}{\ }\end{array}\right\}j\end{array}\end{array}.$$
for some $j$-by-$(n-k)$ matrix $B_j$. Since the first $k-j$ rows and the last $n-k$ rows of $A'^j$ are linearly independent, we infer that ${\rm rank\,} A^j={\rm rank\,} A'^j=(k-j)+(n-k)=n-j$ for $1\le j\le k$. \hspace{2mm} $\blacksquare$
The next corollary complements Corollary 2.5: it shows that any allowable value for $p(A)$ can actually be attained by some matrix $A$.
{\bf Corollary 3.2.} \emph{For any integers $n$ and $j$ satisfying $1\le j\le n-1$}, \emph{there is an $n$-by-$n$ matrix $A$ with $p(A)=j$}.
{\em Proof}. Let $A$ be a noninvertible $S_n$-matrix with the algebraic multiplicity of its eigenvalue 0 equal to $j$ (cf. \cite[Corollary 1.3]{2}). Then $p(A)=a(A)=j$ by Proposition 3.1. \hspace{2mm} $\blacksquare$
For an $n$-by-$n$ matrix $A=[a_{ij}]_{i,j=1}^n$ and an $m$-by-$m$ matrix $B$, their {\em tensor product} (or {\em Kronecker product}) $A\otimes B$ is the $(nm)$-by-$(nm)$ matrix
$$\left[
\begin{array}{ccc}
a_{11}B & \cdots & a_{1n}B \\
\vdots & & \vdots \\
a_{n1}B & \cdots & a_{nn}B
\end{array}
\right].$$
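As a quick numerical sanity check of this block structure (a sketch with hypothetical $2$-by-$2$ matrices, not taken from the text), NumPy's `np.kron` builds exactly the matrix of blocks $a_{ij}B$:

```python
import numpy as np

# Hypothetical 2-by-2 matrices, purely to exhibit the block structure of A ⊗ B.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

K = np.kron(A, B)   # the (nm)-by-(nm) matrix whose (i, j) block is a_ij * B

assert K.shape == (4, 4)
assert np.array_equal(K[0:2, 2:4], 2 * B)   # block (1, 2) is a_12 * B
assert np.array_equal(K[2:4, 0:2], 3 * B)   # block (2, 1) is a_21 * B
```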
Basic properties of tensor products can be found in \cite[Chapter 4]{9}. Our main concern here is when $W(A)$ and $W(A\otimes A)$ are circular discs (centered at the origin). Problems of this nature have also been considered in \cite{1}. The main result of this section is the following theorem.
{\bf Theorem 3.3.} \emph{Let $A$ be an $S_n$-matrix}. \emph{Then the following conditions are equivalent}:
(a) \emph{$W(A)$ is a circular disc centered at the origin},
(b) \emph{$W(A\otimes A)$ is a circular disc centered at the origin}, \emph{and}
(c) \emph{$A$ is unitarily similar to $J_n$}.
In preparation for its proof, we need the next lemma.
{\bf Lemma 3.4.} \emph{Let $A$ and $B$ be nonzero $n$-by-$n$ and $m$-by-$m$ matrices}, \emph{respectively}.
(a) $$a(A\otimes B)=\left\{\begin{array}{ll}
\min\{a(A), a(B)\} \ \ \ & \mbox{\em if} \ \ a(A), a(B)\ge 1,\\
a(A) & \mbox{\em if} \ \ a(B)=0,\\
a(B) & \mbox{\em if} \ \ a(A)=0.\end{array}\right.$$
(b) \emph{If $A$ and $B$ are partial isometries}, \emph{then so is $A\otimes B$}. \emph{The converse is false}.
(c) \emph{Assume that $A$ and $B$ are} (\emph{nonzero}) \emph{contractions}. \emph{Then $A$ and $B$ are partial isometries if and only if $A\otimes B$ is a partial isometry}.
(d) \emph{If $A$ and $B$ are} (\emph{nonzero}) \emph{contractions}, \emph{then} $p(A\otimes B)=\min\{p(A), p(B)\}$.
(e) \emph{$A$ is a partial isometry if and only if $A\otimes A$ is}. \emph{Thus}, \emph{in particular}, $p(A\otimes A)=p(A)$.
The proof makes use of the facts that (i) if $A$ (resp., $B$) is similar to $A'$ (resp., $B'$), then $A\otimes B$ is similar to $A'\otimes B'$, and (ii) if the eigenvalues of $A$ (resp., $B$) are $a_i$, $1\le i\le n$ (resp., $b_j$, $1\le j\le m$), then the eigenvalues of $A\otimes B$ are $a_ib_j$, $1\le i\le n$, $1\le j\le m$, counting algebraic multiplicities (cf. \cite[Theorem 4.2.12]{9}).
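Fact (ii) is easy to check numerically; the sketch below (with arbitrary random matrices, an illustration rather than part of the proof) verifies that the eigenvalues of $A\otimes B$ are exactly the products $a_ib_j$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

eig_A = np.linalg.eigvals(A)
eig_B = np.linalg.eigvals(B)

# Fact (ii): the nm eigenvalues of A ⊗ B are the products a_i * b_j.
products = np.sort_complex(np.array([a * b for a in eig_A for b in eig_B]))
eig_kron = np.sort_complex(np.linalg.eigvals(np.kron(A, B)))

assert np.allclose(products, eig_kron)
```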
{\em Proof of Lemma $3.4$}. (a) Let $k_1=a(A)$ and $k_2=a(B)$, and assume that $2\le k_1\le k_2$. Let $J_{k_1}$ (resp., $J_{k_2}$) be a Jordan block in the Jordan form of $A$ (resp., $B$). Since
$$(J_{k_1}\otimes J_{k_2})^{k_1}=J_{k_1}^{k_1}\otimes J_{k_2}^{k_1}=0_{k_1}\otimes J_{k_2}^{k_1}=0_{k_1k_2}$$
and
$$(J_{k_1}\otimes J_{k_2})^{k_1-1}=J_{k_1}^{k_1-1}\otimes J_{k_2}^{k_1-1}\neq 0_{k_1k_2},$$
the size of the largest Jordan block in the Jordan form of $A\otimes B$ is $k_1$. This shows that $a(A\otimes B)=k_1=\min\{a(A), a(B)\}$. The other cases can be proven even more easily.
(b) This is a consequence of the equivalence of (a) and (b) in Lemma 2.1 as $A^*A$ and $B^*B$ are projections, which implies the same for $(A\otimes B)^*(A\otimes B)$. The converse is false as seen by the example of $A=[2]$ and $B=[1/2]$.
(c) If $A\otimes B$ is a partial isometry, then $(A\otimes B)^*(A\otimes B)=(A^*A)\otimes(B^*B)$ is a projection by Lemma 2.1. Since the positive semidefinite $A^*A$ and $B^*B$ are both contractions, their eigenvalues $a_i$, $1\le i\le n$, and $b_j$, $1\le j\le m$, are such that $0\le a_i, b_j\le 1$ for all $i$ and $j$. As the eigenvalues of $(A^*A)\otimes(B^*B)$, the products $a_ib_j$, $1\le i\le n$, $1\le j\le m$, can only be $0$ and $1$. Thus the same is true for the $a_i$'s and $b_j$'s. It follows that $A^*A$ and $B^*B$ are projections. Therefore, $A$ and $B$ are partial isometries.
(d) This follows from (c) immediately.
(e) If $A\otimes A$ is a partial isometry, then $(A\otimes A)^*(A\otimes A) = (A^*A)\otimes (A^*A)$ is a projection with eigenvalues 0 and 1. But its eigenvalues are also given by $a_i a_j$, $1\le i, j \le n$, where the $a_i$'s are eigenvalues of $A^*A$. If any $a_i$ is nonzero and not equal to 1, then the same is true for $a_i^2$, which is a contradiction. Hence all the $a_i$'s are either 0 or 1. It follows that $A^*A$ is a projection and $A$ is a partial isometry. The converse was proven in (c). \hspace{2mm} $\blacksquare$
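The characterization used throughout — $A$ is a partial isometry exactly when $A^*A$ is an orthogonal projection (Lemma 2.1) — is straightforward to test numerically. The sketch below (the helper name is ours) checks it for the Jordan block $J_n$ and for $J_n\otimes J_n$, illustrating part (e):

```python
import numpy as np

def is_partial_isometry(A, tol=1e-10):
    """Per Lemma 2.1: A is a partial isometry iff A*A is an orthogonal projection."""
    P = A.conj().T @ A
    return bool(np.allclose(P @ P, P, atol=tol) and np.allclose(P, P.conj().T, atol=tol))

n = 4
J = np.diag(np.ones(n - 1), k=1)   # the Jordan nilpotent block J_n

assert is_partial_isometry(J)               # J*J = diag(0, 1, ..., 1) is a projection
assert is_partial_isometry(np.kron(J, J))   # illustrates Lemma 3.4 (e)
assert not is_partial_isometry(2 * J)       # scaling destroys the property
```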
Finally, we are ready to prove Theorem 3.3.
{\em Proof of Theorem $3.3$}. To prove (a) $\Rightarrow$ (c) (resp., (b) $\Rightarrow$ (c)), note that the center of the circular disc $W(A)$ (resp., $W(A\otimes A)$) must be an eigenvalue of $A$ (resp., $A\otimes A$) (cf. \cite[Theorem]{3}). In particular, this says that $A$ (resp., $A\otimes A$) is noninvertible. Since the eigenvalues of $A\otimes A$ are $a_ia_j$, $1\le i, j\le n$, where the $a_i$'s are the eigenvalues of $A$ (cf. \cite[Theorem 4.2.12]{9}), the noninvertibility of $A\otimes A$ also implies that of $A$. Hence $p(A)=a(A)$ or $\infty$ by Proposition 3.1 (b). If $p(A)=\infty$, then we already have (c) by Proposition 3.1 (c). Thus we may assume that $p(A)=a(A)$. In this case, we also have
$$p(A\otimes A)=p(A)=a(A)=a(A\otimes A)$$
by Lemma 3.4 (d) (or (e)) and (a). Applying Theorem 2.6, we obtain the unitary similarity of $A$ (resp., $A\otimes A$) to a direct sum of Jordan blocks. It follows that the only eigenvalue of $A$ (resp., $A\otimes A$ and hence of $A$) is 0. Hence $A$ is unitarily similar to $J_n$, that is, (c) holds.
The implication (c) $\Rightarrow$ (a) is trivial since, under (c), we have $W(A)=\{z\in\mathbb{C} : |z|\le\cos(\pi/(n+1))\}$. For (c) $\Rightarrow$ (b), note that (c) implies that $A$ is unitarily similar to $e^{i\theta}A$ for all real $\theta$. Hence $A\otimes A$ is unitarily similar to $e^{i\theta}(A\otimes A)$ for real $\theta$. Thus $W(A\otimes A)$ is a circular disc centered at the origin. This also follows from \cite[Proposition 2.8]{1}. \hspace{2mm} $\blacksquare$
We remark that the equivalence of (a) and (c) in Theorem 3.3 was shown before in \cite[Lemma 5]{12} by a completely different proof.
We end this section with two examples and one open question. The examples show that, in contrast to the case of $S_n$-matrices, the conditions of $W(A)$ and $W(A\otimes A)$ being circular discs centered at the origin are independent of each other for a general matrix $A$.
{\bf Example 3.5.} Let $A=[\lambda]\oplus J_2$, where $1/2<|\lambda|\le 1/\sqrt{2}$. Then
$$W(A\otimes A)=W([\lambda^2]\oplus\lambda J_2\oplus\lambda J_2\oplus\left[\begin{array}{cc} 0_2 & J_2\\ 0 & 0_2\end{array}\right])=\{z\in\mathbb{C} : |z|\le\frac{1}{2}\},$$
but $W(A)$, being the convex hull of $\{\lambda\}\cup\{z\in\mathbb{C} : |z|\le 1/2\}$, is obviously not a circular disc.
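Example 3.5 can be verified numerically through the support function of the numerical range: in each direction $\theta$, the support of $W(A)$ equals the maximum eigenvalue of ${\rm Re\,}(e^{-i\theta}A)$. The sketch below, with the hypothetical choice $\lambda=0.6$, confirms that $W(A)$ is not a disc centered at the origin while $W(A\otimes A)$ has constant support $1/2$:

```python
import numpy as np

def support(A, theta):
    """Support function of W(A) in direction theta:
    the maximum eigenvalue of Re(e^{-i*theta} A)."""
    H = (np.exp(-1j * theta) * A + np.exp(1j * theta) * A.conj().T) / 2
    return np.linalg.eigvalsh(H).max()

lam = 0.6   # hypothetical value with 1/2 < |lam| <= 1/sqrt(2)
J2 = np.array([[0, 1], [0, 0]], dtype=complex)
A = np.block([[np.array([[lam]]), np.zeros((1, 2))],
              [np.zeros((2, 1)), J2]])

# W(A) reaches out to lam in direction theta = 0 but only to 1/2 at theta = pi:
assert abs(support(A, 0.0) - lam) < 1e-9
assert abs(support(A, np.pi) - 0.5) < 1e-9

# W(A ⊗ A) has support 1/2 in every direction, consistent with a disc of radius 1/2.
for theta in np.linspace(0, 2 * np.pi, 9):
    assert abs(support(np.kron(A, A), theta) - 0.5) < 1e-9
```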
{\bf Example 3.6.} Let $$A=\left[\begin{array}{ccc} 0 & -\sqrt{2} & 1\\ 0 & 0 & 1\\ 0 & 0 & \sqrt{2}/2\end{array}\right].$$
Then, for any real $\theta$,
$${\rm Re\, }(e^{i\theta}A)=\frac{1}{2}\left[\begin{array}{ccc} 0 & -\sqrt{2}e^{i\theta} & e^{i\theta}\\ -\sqrt{2}e^{-i\theta} & 0 & e^{i\theta}\\ e^{-i\theta} & e^{-i\theta} & \sqrt{2}\cos\theta\end{array}\right],$$
whose maximum eigenvalue can be computed to be always equal to 1. Hence $W(A)=\overline{\mathbb{D}}$. On the other hand, a long and tedious computation shows that the characteristic polynomial $p(z)\equiv\det(zI_9-2{\rm Re\, }(A\otimes A))$ of $2{\rm Re\, }(A\otimes A)$ can be factored as
\begin{equation}\label{e10}
z^2(z^2-3)(z^5-z^4-17z^3+17z^2+46z-48).
\end{equation}
Assume that $W(A\otimes A)=\{z\in\mathbb{C} : |z|\le \sqrt{r}/2\}$ for some $r>0$. Then the maximum and minimum eigenvalues of $2{\rm Re\, }(A\otimes A)$ are $\sqrt{r}$ and $-\sqrt{r}$, respectively. Note that $p(2)=-32<0$ and $p(z)\rightarrow\infty$ as $z\rightarrow\infty$ imply that $p$ has a zero larger than 2. Hence $\sqrt{r}>2>\sqrt{3}$, and in particular $r\neq 3$. Since both $\sqrt{r}$ and $-\sqrt{r}$ are zeros of $p$, we also have
\begin{align}\label{e11}
& \ p(z)= z^2(z^2-3)(z^2-r)(z^3+az^2+bz+c) \nonumber\\
=& \ z^2(z^2-3)(z^5+az^4+(b-r)z^3+(c-ar)z^2-brz-cr)
\end{align}
for some real $a$, $b$ and $c$. Comparing the coefficients of the last factors in ({\rm Re\, }f{e10}) and ({\rm Re\, }f{e11}) yields that $a=-1$, $b-r=-17$, $c-ar=17$, $br=-46$ and $cr=48$. From these, we deduce that $c+r=17$ and hence $b=-c$. This leads to $-46=br=-cr$, which contradicts $cr=48$. Thus $W(A\otimes A)$ cannot be a circular disc at 0.
The matrix $A$ in the preceding example was also considered in \cite[Example 3.4]{1} for another purpose.
{\bf Question 3.7.} Is it true that, for any integers $n$, $j$ and $k$ satisfying $1\le j\le k\le n-1$, there is an $n$-by-$n$ matrix $A$ with $p(A)=j$ and $a(A)=k$? This is a refinement of Corollary 3.2. It is true if $k<n/2$. Indeed, in this case, we have $j\le k\le n-k-1$. Let $A=J_k\oplus B$, where $B$ is a noninvertible $S_{n-k}$-matrix whose eigenvalue 0 has algebraic multiplicity $j$. Then $p(A)=p(B)=a(B)=j$ by Proposition 3.1. On the other hand, we obviously have $a(A)=k$.
\end{document}
\begin{document}
\pagestyle{myheadings}
\title{Leader-Following Consensus of Multiple Linear Systems Under Switching Topologies: An Averaging Method}
\author{Wei Ni, Xiaoli Wang and Chun Xiong}
\contact{Wei}{Ni}{School of Science, Nanchang University, Nanchang 330031,
P. R. China.}{[email protected]}
\contact{Xiaoli}{Wang}{School of Information Science and Engineering,
Harbin Institute of Technology at Weihai,
Weihai 264209, P. R. China.}{[email protected]}
\contact{Chun}{Xiong}{School of Science, Nanchang University, Nanchang 330031,
P. R. China.}{[email protected]}
\markboth{W. Ni, X. Wang and C. Xiong} {Leader-Following Consensus of Multiple Linear Systems}
\maketitle
\begin{abstract}
The leader-following consensus of multiple linear time invariant (LTI) systems under switching topology is considered.
The leader-following consensus problem consists of designing for each agent a distributed protocol to make all agents track a leader vehicle, which has the same LTI dynamics as the agents.
The interaction topology describing the information exchange of these agents is time-varying. An averaging method is
proposed. Unlike the existing results in the literature, which assume the LTI agents to be neutrally stable, we relax this condition, assuming only that the LTI agents are stabilizable and detectable. Observer-based leader-following consensus is also considered.
\end{abstract}
\keywords{Consensus, multi-agent systems, averaging method}
\classification{93C15, 93C35}
\section{Introduction}
Multi-agent systems are a hot topic in a variety of research communities, such as robotics,
sensor networks, artificial intelligence, automatic control and biology.
Of particular interest in this field is the consensus problem, since it lays the foundation for many consensus-related problems, including formation, flocking and swarming. We refer to the survey papers \cite{olfati2007,ren2007} and
references therein for details.
Integrator and double integrator models are the simplest abstraction, upon which a large part of results on consensus
of multi-agent systems have been based (see \cite{ren2005,olfati2004,olfati2007,jadb2003,cheng2008,hong2007}). To deal with more complex models, a number of recent papers are devoted
to consensus of multiple LTI systems
\cite{zhang2011,wang2008,ni2010,scardovi2009,seo2009,liu2009,khoo2009,Yoshioka2008,namerikawa2008,wang2009,wang2010,wang2011}.
These results keep most of the concepts provided by earlier developments, and provide new design and analysis technique,
such as LQR approach, low gain approach, $H_{\infty}$ approach, parametrization and geometric approach, output regulation approach, and homotopy based approach. However, most of these results
\cite{zhang2011,wang2008,ni2010,seo2009,liu2009,khoo2009,Yoshioka2008,namerikawa2008} mainly focus on fixed interaction topology, rather than time-varying
topology. How do switches of the interaction topology and the agent dynamics jointly affect
the collective behavior of the multi-agent system? Attempts to understand this issue have been hampered by the lack of suitable analysis tools.
The results of Scardovi et al. \cite{scardovi2009} and Ni et al. \cite{ni2010} are mentioned here,
because of their contributions to dealing with switching topology in the setup of high-order agent model. However, when dealing with switching topology,
\cite{scardovi2009} and \cite{ni2010} assumed that the system of each agent is neutrally stable and thus has no eigenvalues with positive real part.
This assumption is widely adopted in the literature, whether the interaction topology is fixed or switching. Unfortunately, when the agents are merely stabilizable and detectable rather than neutrally stable, and the interaction topology is switching, no
result investigating the consensus of such agents has been reported in the literature.
To deal with switching graph topology and to remove the neutral stability condition, we provide a modified averaging approach, which is motivated by \cite{aeyels1999,bellman1985,kosut1987}.
The averaging approach was initially proposed by Krylov and Bogoliubov in celestial mechanics \cite{krylov1943}, and
was further developed in the work of \cite{bogoliubov1961,krasnosel1955}; for more details refer to the recent book \cite{sanders2007}. Closely related to the averaging theory is the stability of fast time-varying nonautonomous systems \cite{aeyels1999,kosut1987,bellman1985}, and more specifically the fast switching systems \cite{stilwell2006,teel2011}. The modified approach in this paper is motivated by the work of Stilwell et al. \cite{stilwell2006}, and also the work of
\cite{aeyels1999,kosut1987,bellman1985}. Although this work borrows the idea from \cite{stilwell2006}, it differs from \cite{stilwell2006} as follows. The synchronization in \cite{stilwell2006} is achieved under a fast switching condition; that is, synchronization is realized under two time scales: a time scale $t$ for the agent dynamics and a time scale for the switching signal parameterized by $t/\varepsilon$ with $\varepsilon$ small enough. In our paper, we further establish that the two time scales can be made the same, so the consensus results in our paper are not limited to the fast switching case. Furthermore, we present an extended averaging approach for consensus: a sequence of averaged systems (rather than a single averaged system) serves as the indicator of consensus of the multi-agent system. This allows us to obtain more relaxed conditions for consensus. Finally, we investigate how to render this sequence of averaged systems consensus-achieving, and thus ensure the consensus of the original multi-agent system; this was not investigated in \cite{stilwell2006}. The result in our paper shows that if there exists an infinite sequence of uniformly bounded and contiguous time intervals such that during each such interval the interaction graph is jointly connected, and if the dwell time for each subgraph is appropriately small, then consensus can be achieved.
In summary, the contributions of this paper are as follows:
\begin{itemize}
\item Averaging method is applied to leader-following consensus of multiple LTI systems.
\item Results are obtained for a wider class of agent dynamics which is stabilizable and detectable than the existing class of neutrally stable agent dynamics.
\item The agent dynamics and the switching signal considered in this paper have the same time scale, rather than having different time scales considered in \cite{stilwell2006}. Thus the results in our paper are not limited to fast time switching case.
\end{itemize}
The rest of this paper is organized as follows.
Section 2 contains the problem formulation and some preliminary results.
Section 3 provides the main result of leader-following consensus, and extensions are made in
Section 4 which devotes to observer-based protocols design and analysis.
Two illustrated examples are presented in Section 5.
Section 6 is a brief conclusion.
\section{Problem Formulation and Preliminaries}
This section presents the multi-agent system model, in which each agent is a stabilizable LTI system that includes the integrator and double integrator as special cases. The leader-following consensus problem is formulated using graph theory. Some supporting lemmas are also included here.
Consider $N$ agents with the same dynamics
\begin{eqnarray}\label{2.1}
\dot x_i=Ax_i+Bu_i,\quad i=1, 2, \cdots, N,
\end{eqnarray}
where $x_i\in \mathbb{R}^n$ is the agent $i$'s state, and $u_i\in
\mathbb{R}^m$ is agent $i$'s input through which the interactions
or coupling between agent $i$ and other agents are realized.
The matrix $B$ is of full column rank.
The state information is transmitted among these agents, and the agents together with the transmission channels form a network.
We use a directed graph $\mathcal {G} =(\mathcal{V}, \mathcal {E})$ to describe the topology of this network, where $\mathcal{V}=\{1,2,\cdots,N\}$ is the set of nodes
representing $N$ agents and $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ is the set of ordered
edges $(i, j)$, meaning that agent $i$ can send information to agent $j$.
The leader, labeled as $i=0$, has linear dynamics as
\begin{eqnarray}\label{2.2}
\dot x_0=Ax_0,
\end{eqnarray}
where $x_0\in \mathbb{R}^n$ is the state.
Referring to agents $i\in \{1, \cdots, N\}$ as follower agents, the leader's dynamics are independent of the follower agents. More specifically, the leader just sends information to some follower agents, without receiving information from them.
The interaction structure of the whole agents $\{0,1,\cdots,N\}$ is described by an extended directed graph
$\bar{\mathcal{G}}=(\bar{\mathcal{V}}, \bar{\mathcal{E}})$, which consists of graph
$\mathcal{G}$, node $0$ and directed edges from node $0$ to its information-sending follower nodes.
\begin{definition} {\bf (Definitions related to graph)}
Consider a graph $\bar{\mathcal{G}}=(\bar{ \mathcal{V}}, \bar{ \mathcal{E}})$ with
$\bar{ \mathcal{V}}=\{0, 1, \cdots, N\}$.
\begin{itemize}
\item The set of neighbors of node $i$ relative to subgraph $\mathcal{G}$ of $\bar{\mathcal{G}}$
is denoted by $\mathcal{N}_i=\{j\in \mathcal{V}: (j,i)\in \mathcal{E}, j\neq i\}$.
\item A directed path is a sequence of edges $(i_1,i_2),(i_2,i_3),(i_3,i_4),\cdots$ in that graph.
\item Node $i$ is reachable to node $j$ if there is a directed path from $i$ to $j$.
\item The graph $\bar{\mathcal{G}}$ is called connected if node $0$ is reachable to any other node.
\end{itemize}
\end{definition}
\begin{definition}{\bf (Structure matrices of graph)}\label{def2}
\begin{itemize}
\item For a directed graph $\mathcal{G}$ on nodes
$\{1,\cdots,N\}$, its structure is described by its adjacency matrix $\mathcal {A}\in \mathbb{R}^{N\times N}$,
whose $ij$-th entry is 1 if
$(j,i)$ is an edge of $\mathcal{G}$ and 0 otherwise; or by its Laplacian matrix $\mathcal{L}=-\mathcal {A}+ \Lambda$, where
$\Lambda\in \mathbb{R}^{N\times N}$ is the in-degree matrix of $\mathcal{G}$, which is diagonal with $i$-th diagonal element $|\mathcal{N}_i|$, the cardinality of $\mathcal{N}_i$, which equals $\sum_{j\neq i}a_{ij}$.
\item For a directed graph $\bar{\mathcal{G}}$ on the node set $\{0, 1, \cdots, N\}$, one uses the matrix $\mathcal{H}=\mathcal{L}+\mathcal{D}$ to describe its structure,
where $\mathcal{L}$ is the Laplacian matrix of its subgraph $\mathcal{G}$ and $\mathcal{D}=diag(d_1, \cdots, d_N)$ with $d_i=1$ if $(0,i)$ is an edge of graph $\bar{\mathcal{G}}$ and $d_i=0$ otherwise. Obviously, the structure of the graph $\bar{\mathcal{G}}$ can also be described by its Laplacian $\bar{\mathcal{L}}$.
\end{itemize}
\end{definition}
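The structure matrices of the definition above are straightforward to assemble. The sketch below uses a hypothetical three-follower directed cycle with the leader pinned to follower 1, and builds $\mathcal{A}$, $\mathcal{L}=-\mathcal{A}+\Lambda$ and $\mathcal{H}=\mathcal{L}+\mathcal{D}$:

```python
import numpy as np

# Hypothetical follower graph on {1, 2, 3} with edges (1,2), (2,3), (3,1),
# i.e., a directed cycle; the leader 0 sends its state to follower 1 only.
# Adjacency convention from the definition: entry (i, j) is 1 iff (j, i) is an edge.
Adj = np.array([[0., 0., 1.],
                [1., 0., 0.],
                [0., 1., 0.]])

Lap = -Adj + np.diag(Adj.sum(axis=1))   # L = -A + Λ, with the in-degrees on the diagonal
D = np.diag([1., 0., 0.])               # d_1 = 1: leader pinned to follower 1
H = Lap + D                             # structure matrix H = L + D

assert np.allclose(Lap.sum(axis=1), 0)         # Laplacian rows sum to zero
assert np.all(np.linalg.eigvals(H).real > 0)   # connected graph: H has eigenvalues in the open right half-plane
```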
It is noted that the graph describing the interaction topology of nodes $\{0, 1, \cdots, N\}$
can vary with time. To account for this we need to consider all
possible graphs $\{\bar{\mathcal{G}}_p: p\in \mathcal {P}\}$, where $\mathcal
{P}$ is an index set for
all graphs defined on nodes $\{0,1,\cdots,N\}$. Obviously, $\mathcal{P}$ is a finite set.
We use $\{\mathcal{G}_p: p\in \mathcal {P}\}$ to denote subgraphs defined on
vertices $\{1,\cdots,N\}$. The dependence of the graphs upon time
can be characterized by a switching law $\sigma: [0,
\infty)\rightarrow \mathcal {P}$ which is a piece-wise constant and right continuous map; that is, at each time $t$, the
underlying graph is $\bar{\mathcal{G}}_{\sigma(t)}$.
For each agent $i\in \{1, \cdots, N\}$, if agent $j$ is a neighbor of agent
$i$,
the relative information $x_j-x_i$ is fed back to agent $i$ with a gain matrix $K$ to be designed later.
The leader-following consensus problem consists of designing for each agent $i\in \{1, \cdots, N\}$ a distributed protocol which is a linear feedback, or a dynamical feedback of
\begin{eqnarray}\label{protocol}
z_i=\sum_{j\in \mathcal{N}_i(t)}(x_j-x_i)+d_i(t)(x_0-x_i)
\end{eqnarray}
such that the closed-loop systems (\ref{2.1})-(\ref{2.2}) achieve the following collective behavior:
\begin{eqnarray}\label{leaderfollowing}
\lim_{t\rightarrow \infty}\|x_i(t)-x_0(t)\|=0, \quad i=1, \cdots, N.
\end{eqnarray}
To solve the leader-following consensus problem, the following assumption is proposed throughout this paper.
\begin{assumption}\label{stabilizable}
The pair $(A,B)$ is stabilizable.
\end{assumption}
The following result presents an averaging method for the stability of fast time-varying linear systems.
The general result for nonlinear systems can be found in \cite{aeyels1999,kosut1987,bellman1985}.
For convenience of later use, we restate the result in the following form.
\begin{lemma}\label{lemma2}
Consider a linear time-varying system $\dot x(t)=A(t)x(t)$ with $A(\cdot): \mathbb{R}\rightarrow \mathbb{R}^{n\times n}$. If there exists an increasing sequence of times $t_k, k\in \mathbb{Z}$, with $t_k \rightarrow +\infty$ as $k\rightarrow +\infty$, $t_k \rightarrow -\infty$ as $k\rightarrow -\infty$, and $t_{k+1}-t_k\leq T$ for some $T>0$, such that for every $t_k$ the following averaged systems
\begin{eqnarray}
\dot {\bar x}(t)=\bar A_k \bar x(t), \quad \bar A_k=\frac{\int_{t_k}^{t_{k+1}}A(t)dt}{t_{k+1}-t_k}, k=0,1,\cdots
\end{eqnarray}
are asymptotically stable, then there exists $\alpha^*>0$ such that the following fast time-varying system
\begin{eqnarray}
\dot x(t)=A(\alpha t)x(t)
\end{eqnarray}
is asymptotically stable for all $\alpha> \alpha^*$.
\end{lemma}
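The lemma can be illustrated with a minimal sketch: two individually unstable modes whose average is Hurwitz are stabilized by fast periodic switching but not by slow switching. The matrices below are hypothetical illustrations, not taken from the paper; stability over one switching period is read off from the spectral radius of the periodic transition matrix.

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (adequate for diagonalizable M)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

# Hypothetical modes: each A_i has an unstable eigenvalue +1, yet the
# average (A1 + A2)/2 = [[-1, -2], [2, -1]] is Hurwitz.
A1 = np.array([[1., -4.], [0., -3.]])
A2 = np.array([[-3., 0.], [4., 1.]])

def period_spectral_radius(tau):
    """Spectral radius of the transition matrix e^{A2 tau} e^{A1 tau}
    over one switching period with dwell time tau in each mode."""
    Phi = expm(A2 * tau) @ expm(A1 * tau)
    return np.abs(np.linalg.eigvals(Phi)).max()

assert period_spectral_radius(0.01) < 1.0   # fast switching: contraction
assert period_spectral_radius(5.0) > 1.0    # slow switching: divergence
```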
\begin{remark}\label{remark1}
It has been shown in \cite[Remark 4]{aeyels1999} that
the value $\alpha^*$ can be estimated from $T$ by solving the equation
\begin{eqnarray}\label{alpha}
e^{\frac{KT}{\alpha}}\frac{T}{\alpha}=\frac{1}{K}\left(-1+\sqrt{1+\frac{v}{K_vKT}}\right)
\end{eqnarray}
for $\alpha$, where $T>0$ is defined above and $K_v>0, K>0, v>0$ are parameters which can be determined from the system matrix; furthermore, this equation
has for every $T>0$, $K_v>0, K>0, v>0$ exactly one positive solution $\alpha$.
Now fixing $K_v>0, K>0, v>0$, we show that as $T\rightarrow 0$ the corresponding solution $\alpha=\alpha(T)\rightarrow 0$; indeed, $T\rightarrow 0$ makes the right-hand side of (\ref{alpha}) go to infinity, thus requiring $\frac{T}{\alpha}$ on the left-hand side of (\ref{alpha}) to go to infinity, which results in
$\alpha \rightarrow 0$.
Therefore, appropriately choosing a small $T>0$ gives
a solution $\alpha=\alpha^*<1$.
\end{remark}
The following rank property of the Kronecker product will be used. The proof is
straightforward and is thus omitted.
\begin{lemma}\label{kron}
For any matrices $P, Q_1, \cdots, Q_n$ of appropriate dimensions, the following property holds:
\begin{eqnarray*}
rank({P \otimes \left(
\begin{array}{c}
Q_1 \\
Q_2 \\
\vdots\\
Q_n
\end{array}
\right)})
=rank(
\left(
\begin{array}{c}
P\otimes Q_1 \\
P\otimes Q_2 \\
\vdots\\
P\otimes Q_n
\end{array}
\right))
\end{eqnarray*}
\end{lemma}
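The lemma holds because the right-hand side is just a row permutation of the left-hand side, which leaves the rank unchanged; a quick numerical check with random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))
Q1 = rng.standard_normal((2, 4))
Q2 = rng.standard_normal((2, 4))

lhs = np.kron(P, np.vstack([Q1, Q2]))               # P ⊗ [Q1; Q2]
rhs = np.vstack([np.kron(P, Q1), np.kron(P, Q2)])   # [P ⊗ Q1; P ⊗ Q2]

# The two matrices differ only by a permutation of rows, hence have equal rank.
assert np.linalg.matrix_rank(lhs) == np.linalg.matrix_rank(rhs)
```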
The following result will also be used later.
\begin{lemma}\label{lemma4}
Consider an $n$-th order differential system $\dot x(t)=A_1x(t)+A_2y(t)$ with
$A_1\in \mathbb{R}^{n \times n},A_2\in \mathbb{R}^{n \times m}$, and $y(t)\in \mathbb{R}^m$. If $A_1$ is Hurwitz
and $\lim_{t\rightarrow \infty}y(t)=0$, then $\lim_{t\rightarrow \infty}x(t)=0$.
\end{lemma}
{\bf Proof:} Let $x(t,x_0,y(t))$ denote the solution of $\dot x(t)=A_1x(t)+A_2y(t)$
with initial state $x_0$ at $t=0$.
Since $A_1$ is Hurwitz, there exist positive numbers $\alpha,\gamma_1$ and $\gamma_2$ such that
\begin{eqnarray*}
\|x(t,x_0,y(t))\|\leq \gamma_1 \|x_0\|e^{-\alpha t}+\gamma_2 \|y(t)\|_{\infty},
\end{eqnarray*}
where $\|y(t)\|_{\infty}={\rm ess\,sup}_{t\geq 0}\|y(t)\|$.
Since $\lim_{t\rightarrow \infty}y(t)=0$, for any $\varepsilon>0$
there exists a $T>0$ such that
$\gamma_2\|y(t)\|<\varepsilon /2$ for all $t\geq T$.
Similarly, $\gamma_1 \|x_0\|e^{-\alpha t}<\varepsilon /2$ for all $t$ sufficiently large. Therefore, $\|x(t,x_0,y(t))\|<\varepsilon$ for all sufficiently large $t$.
This completes the proof.
$\blacksquare$
\section{Leader-Following Consensus of Multiple LTI Systems}
This section presents the leader-following consensus of multiple stabilizable LTI systems under switching topology. Unlike most results in the literature, we do not impose the assumption that $A$ is neutrally stable.
For completeness, we first review a result from \cite{ni2010} when the graph is fixed and undirected.
\begin{theorem}
For the multi-agent system {\rm(\ref{2.1})}-{\rm(\ref{2.2})} associated with connected graph $\bar{\mathcal{G}}$ under Assumption
{\rm\ref{stabilizable}}, let $P>0$ be a solution to the Riccati inequality
\begin{eqnarray}\label{riccati}
PA+A^TP-2\delta PBB^TP+I_n<0,
\end{eqnarray}
where $\delta$ is the smallest eigenvalue of the structure matrix $\mathcal H$ of the graph $\bar{\mathcal{G}}$ (which is shown therein to be positive). Then, under the control law $u_i=Kz_i$ with $K=B^TP$,
all the agents follow the leader from any initial conditions.
\end{theorem}
We now treat the leader-following consensus problem under switching topologies and directed graph case.
Denote the state error between agent $i$ and the leader by $\varepsilon_i=x_i-x_0$. The dynamics of
$\varepsilon_i$ are
\begin{eqnarray*}
\dot \varepsilon_i
&=& A\varepsilon_i+Bu_i\\
&=& A\varepsilon_i+BK\sum_{j\in \mathcal{N}_i(t)}(\varepsilon_j-\varepsilon_i)-BKd_i(t)\varepsilon_i, \quad i=1,\cdots,N.
\end{eqnarray*}
By introducing
$\varepsilon=(\varepsilon_1^T, \varepsilon_2^T, \cdots,\varepsilon_N^T)^T$,
one has
\begin{eqnarray}\label{error}
\dot \varepsilon &=& (I_N \otimes A)\varepsilon-(I_N \otimes B) (\mathcal{L}_{\sigma(t)}\otimes I_m) (I_N \otimes K)\varepsilon-(I_N \otimes B)(\mathcal{D}_{\sigma(t)}\otimes I_m) (I_N \otimes K)\varepsilon \nonumber\\
&=& [I_N \otimes A-(\mathcal{L}_{\sigma(t)}+\mathcal{D}_{\sigma(t)})\otimes (BK)]\varepsilon \nonumber\\
&=& [I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (BK)]\varepsilon.
\end{eqnarray}
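In code, the stacked system matrix in the last line of (\ref{error}) is just a Kronecker-product assembly. A minimal sketch (all dimensions and numerical values below are assumptions for illustration, not data from the paper):

```python
import numpy as np

# Toy dimensions: N = 3 agents, n = 2 states, m = 1 input (assumed).
N, n, m = 3, 2, 1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])             # some feedback gain
H = np.array([[2.0, -1.0, 0.0],        # a structure matrix L + D for one topology
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])

# System matrix of the stacked error dynamics, I_N (x) A - H (x) (BK):
A_cl = np.kron(np.eye(N), A) - np.kron(H, B @ K)
```

Block $(i,j)$ of `A_cl` equals $\delta_{ij}A - H_{ij}BK$, matching the derivation above.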
The remaining issue is to find conditions on the switching topologies (i.e., conditions on the switching law $\sigma$) under which one can synthesize a feedback gain matrix $K$ such that the zero solution of system (\ref{error}) is asymptotically stable.
As treated in \cite{hong2007}, consider an infinite sequence of
nonempty, bounded and contiguous time intervals $[t_k, t_{k+1}), k=0,1,\cdots,$
with $t_0=0$, $t_{k+1}-t_k\leq T$ for some constant $T>0$. Suppose
that in each interval $[t_k, t_{k+1})$ there is a sequence of $m_k$
nonoverlapping subintervals
\begin{eqnarray*}
[t_k^1, t_k^2), \cdots, [t_k^j, t_k^{j+1}), \cdots, [t_k^{m_k}, t_k^{m_k+1}), \quad t_k=t_k^1, \quad t_{k+1}=t_k^{m_k+1},
\end{eqnarray*}
satisfying $t_k^{j+1}-t_k^j\geq \tau, 1\leq j\leq m_k$ for a given constant $\tau >0$,
such that during
each of such subintervals, the interconnection topology does not
change. That is, during each time interval $[t_k^j, t_k^{j+1})$, the
graph $\bar{\mathcal{G}}_{\sigma(t)}$ is fixed and we denote it by $\bar{\mathcal{G}}_{k_j}$.
The number $\tau$ is usually called the minimal dwell time of the graphs. The constant $\tau >0$
can be arbitrarily small, and the existence of such a number ensures that the Zeno phenomenon does not occur.
During each time interval $[t_k, t_{k+1})$, some or all of $\bar{\mathcal{G}}_{k_j}, j=1,\cdots, m_k$ are permitted to be disconnected.
We only require the graph to be jointly connected, which is defined as follows:
\begin{definition}{\bf (Joint Connectivity)}
\begin{itemize}
\item The union of a collection of graphs is a graph whose vertex and edge sets are the unions of the vertex and edge sets of the
graphs in the collection.
\item The graphs are said to be jointly connected across the time interval $[t, t+T], T>0$ if the union of graphs
$\{\bar{\mathcal{G}}_{\sigma(s)}: s\in [t, t+T]\}$ is connected.
\end{itemize}
\end{definition}
\begin{assumption}\label{jc}
The graphs $\bar{\mathcal{G}}_{\sigma(t)}$ are jointly connected across each interval $[t_k,
t_{k+1}), k=0,1,\cdots$, with their lengths uniformly upper-bounded by a positive number $T$ and lower-bounded by a positive number $\tau$.
\end{assumption}
The following lemma gives a property of jointly connected graphs. When the graph is undirected, this result has been
reported in \cite{hong2007,hong2007b}. We show this result is still valid when the graph is directed; its proof is
put in the appendix.
\begin{lemma}\label{lemma1}
Let matrices
$\mathcal{H}_{1},\cdots,\mathcal{H}_{m}$ be associated with the graphs $\bar{\mathcal{G}}_{1},\cdots, \bar{\mathcal{G}}_m$ respectively. If these graphs
are jointly connected, then \\
(1) all the eigenvalues of $\sum_{i=1}^{m} \mathcal{H}_i$ have positive real parts.\\
(2) all the eigenvalues of $\sum_{i=1}^{m} \tau_i \mathcal{H}_i$ have positive real parts, where $\tau_i>0$ and $\sum_{i=1}^m \tau_i=1$.
\end{lemma}
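A small numerical check of the lemma, using two hypothetical graphs on three followers that are disconnected individually but jointly connected (the matrices form our own toy example, not data from the paper):

```python
import numpy as np

# Followers {1,2,3}; leader 0. H_i = L_i + D_i as in the text (assumed example).
H1 = np.array([[2.0, -1.0, 0.0],   # follower edge (1,2) plus a leader link to agent 1
               [-1.0, 1.0, 0.0],
               [0.0, 0.0, 0.0]])   # agent 3 is isolated: G1 alone is not connected
H2 = np.array([[0.0, 0.0, 0.0],    # follower edge (2,3) only, no leader link
               [0.0, 1.0, -1.0],
               [0.0, -1.0, 1.0]])

min_re_H1 = float(min(np.linalg.eigvals(H1).real))        # 0: G1 alone fails
min_re_sum = float(min(np.linalg.eigvals(H1 + H2).real))  # > 0: jointly connected
```

A single graph in the collection can have a zero eigenvalue, yet the sum $\mathcal H_1+\mathcal H_2$ has all eigenvalues with positive real parts, as the lemma asserts.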
With this, an averaging method based on Lemma \ref{lemma2} is applied to study the stability of system
(\ref{error}), whose average system during each time interval $[t_k, t_{k+1}), k=0, 1, \cdots$, is
\begin{eqnarray}\label{averagesystem}
\dot {\bar x}=\bar A_k \bar x
\end{eqnarray} with
\begin{eqnarray*}
\bar A_k &=&\frac{\int_{t_k}^{t_{k+1}}[I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (BK)]dt}{t_{k+1}-t_k}\\
&=& I_N \otimes A-\bar{\mathcal{H}}_{[t_k,t_{k+1}]} \otimes (BK),
\end{eqnarray*}
where $\bar{\mathcal{H}}_{[t_k,t_{k+1}]}=\sum_{j=1}^{m_k}\tau_{k_j}\mathcal{H}_{k_j}$ with $\tau_{k_j}=(t_k^{j+1}-t_k^j)/(t_{k+1}-t_k)$, $j=1, \cdots, m_k$.
Denote by $Re\lambda_{min}(\cdot)$ the least real part of the eigenvalues of a matrix.
Define
\begin{eqnarray}\label{delta}
\bar \delta=\min \big\{\inf_{\tiny{(\tau_{k_1}, \cdots, \tau_{k_{m_k}})\in \Gamma_k}}Re\lambda_{min}
(\bar{\mathcal{H}}_{[t_k,t_{k+1})})|k=0,1,\cdots \big \},
\end{eqnarray}
where
\begin{eqnarray*}
\Gamma_k=\{(\tau_{k_1}, \cdots, \tau_{k_{m_k}})\,|\,\sum_{j=1}^{m_k} \tau_{k_j}=1,\ \tau_{k_j}\geq \tau/T,\ j=1,\cdots, m_k\}.
\end{eqnarray*}
Noting that $Re\lambda_{min}(\bar{\mathcal{H}}_{[t_k,t_{k+1})})$ depends continuously on $\tau_{k_1}, \cdots, \tau_{k_{m_k}}$ and that the set $\Gamma_k$ is compact, and by referring to Lemma \ref{lemma1},
one has
\begin{eqnarray*}
\inf_{(\tau_{k_1}, \cdots, \tau_{k_{m_k}})\in \Gamma_k}Re\lambda_{min}
(\bar{\mathcal{H}}_{[t_k,t_{k+1})})
&=&Re\lambda_{min}(\tau_{k_1}^* \mathcal{H}_{k_1}+\cdots+\tau_{k_{m_k}}^*\mathcal{H}_{k_{m_k}} )>0,
\end{eqnarray*}
which, together with the fact that the set in (\ref{delta}) is finite due to finiteness of all graphs, implies that
$\bar \delta$ in (\ref{delta}) is a positive number.
Then the leader-following consensus control can be achieved through the following theorem.
\begin{theorem}\label{theorem2}
For the multi-agent system {\rm(\ref{2.1})}-{\rm(\ref{2.2})} under Assumption {\rm\ref{stabilizable}}, associated with switched graphs $\bar{\mathcal{G}}_{\sigma(t)}$ under Assumption
{\rm\ref{jc}} with $T$ small enough, let $P>0$ be a solution to the Riccati inequality
\begin{eqnarray}\label{riccati2}
PA+A^TP-2 \bar \delta PBB^TP+I_n<0,
\end{eqnarray}
then under the control law $u_i=Kz_i$ with $K=B^TP$
all the agents follow the leader from any initial conditions.
\end{theorem}
{\bf Proof:} We first prove that for each $k=0,1,\cdots$, the average system
(\ref{averagesystem}) is asymptotically stable.
To this end, let $T_k \in \mathbb{C}^{N\times N}$ be
a unitary matrix such that $T_k \bar{\mathcal H}_{[t_k,t_{k+1}]}T^*_k=\bar \Lambda_k$ is an upper triangular matrix whose diagonal elements $\bar\lambda_1^k, \cdots, \bar\lambda_N^k$ are the eigenvalues of
$\bar{\mathcal H}_{[t_k,t_{k+1}]}$, where $T^*_k$ denotes the Hermitian adjoint of $T_k$. Setting $\tilde{x}=(T_k\otimes I_n)\bar x$, (\ref{averagesystem}) becomes
\begin{eqnarray}\label{transx}
\dot{\tilde x}=(I_N \otimes A-\bar\Lambda_k \otimes BK) \tilde x.
\end{eqnarray}
The stability of (\ref{transx}) is equivalent to the stability of its diagonal system
\begin{eqnarray}\label{diasystem}
\dot{\tilde x}=[I_N \otimes A-diag(\bar\lambda_1^k,\cdots, \bar\lambda_N^k) \otimes BK] \tilde x,
\end{eqnarray}
or, equivalently, to the stability of the following $N$ systems
\begin{eqnarray}
\dot{\tilde x}_i=(A-\bar\lambda_i^kBB^TP)\tilde x_i, \quad i=1,\cdots, N.
\end{eqnarray}
Denoting $\bar\lambda_i^k=\bar\mu_i^k+\jmath \bar \nu_i^k$, where $\jmath^2=-1$, and noting that $\bar\mu_i^k\geq \bar\delta$, we have
\begin{eqnarray*}
&&P(A-\bar\lambda_i^kBB^TP)+(A-\bar\lambda_i^kBB^TP)^*P\\
&=&P[A-(\bar\mu_i^k+\jmath \bar\nu_i^k)BB^TP]+[A-(\bar\mu_i^k+\jmath \bar\nu_i^k)BB^TP]^*P\\
&=&PA+A^TP-2\bar\mu_i^k PBB^TP\\
&\leq &PA+A^TP-2 \bar\delta PBB^TP\\
&\leq& -I<0.
\end{eqnarray*}
Therefore system (\ref{averagesystem}) is globally asymptotically stable for each $k=0,1,\cdots$.
Using Lemma \ref{lemma2}, we conclude that there exists a positive $\alpha^*$ depending on $T$, such that for all $\alpha > \alpha^*$, the switching system
\begin{eqnarray} \label{scale}
\dot \varepsilon(t) = [I_N \otimes A-\mathcal{H}_{\sigma(\alpha t)}\otimes (BK)]\varepsilon(t)
\end{eqnarray}
is asymptotically stable.
According to Remark \ref{remark1}, $\alpha^*$ can be made smaller than one if we choose $T$ small enough.
Since every $\alpha>\alpha^*$ works, we may simply pick $\alpha=1$.
That is, system (\ref{error}) is asymptotically stable, which implies that leader-following consensus is achieved.
$\blacksquare$
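The key step in the proof above, the cancellation of the imaginary part of $\bar\lambda_i^k$, can be verified numerically on random data (the matrices below are arbitrary assumptions, not quantities from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)      # an arbitrary symmetric positive definite P
lam = 0.7 + 0.9j                 # an "eigenvalue" with positive real part

Acl = A - lam * (B @ B.T @ P)
lhs = P @ Acl + Acl.conj().T @ P                         # P(A - lam BB^T P) + (.)^* P
rhs = P @ A + A.T @ P - 2 * lam.real * (P @ B @ B.T @ P)
```

Only the real part of $\lambda$ survives, which is exactly why the Riccati inequality in $\bar\delta$ suffices.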
Although the exact value of $\bar \delta$ is hard to obtain, this difficulty can be circumvented as follows. Note that for two positive parameters $\bar{\delta}^* < \bar{\delta}$, if $P>0$ is a solution of (\ref{riccati2}) for the parameter $\bar{\delta}^*$, then this $P$ is also a solution of (\ref{riccati2}) for the parameter $\bar{\delta}$. Thus we can compute a positive definite matrix $P$ with a small enough parameter $\bar{\delta}^*$, which is obviously independent of global information. This treatment has the extra advantage that it makes the consensus control law truly distributed, since the feedback gain $K=B^TP$ then does not involve global information.
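As a sketch of this computation: for a chosen small $\bar\delta^*$, a $P$ satisfying the strict inequality (\ref{riccati2}) can be produced by solving the associated algebraic Riccati equation with $Q=2I$. The matrices $A$, $B$ and the value of $\bar\delta^*$ below are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed data: (A, B) stabilizable (here controllable), small delta parameter.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
delta_star = 0.05

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0 with R^{-1} = 2*delta_star*I, Q = 2I,
# so that P A + A^T P - 2*delta_star*P B B^T P + I = -I < 0 holds exactly.
R = np.eye(1) / (2.0 * delta_star)
P = solve_continuous_are(A, B, 2.0 * np.eye(2), R)

lhs = P @ A + A.T @ P - 2.0 * delta_star * (P @ B @ B.T @ P) + np.eye(2)
K = B.T @ P                       # the feedback gain of the theorem
```

The choice $Q=2I$ is one convenient way to turn the inequality into an equation; any $Q>I$ would do.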
\begin{remark}
During each interval $[t_k,t_{k+1})$, the total dwell time of the $m_k$ graphs is upper bounded by a positive number $T$, which is required to be appropriately small to make $\alpha^*<1$. This means that the dwell time of each graph can not exceed a certain bound. However, in \cite{ni2010} the dwell time for each graph can be arbitrary since there $T$ is not constrained and can be chosen arbitrarily large.
\end{remark}
\begin{remark}
Note that in (\ref{scale}) the switching signal $\sigma(\alpha t)$ and state $\varepsilon(t)$ have different time scales, while our result is obtained for system (\ref{error}) with $\sigma( t)$ and $\varepsilon(t)$ have the same time scale, and thus the result in our paper is not limited to fast time switching case. This distinguishes this work from \cite{stilwell2006}.
\end{remark}
\begin{remark}
It can be seen that if $P>0$ is a solution to (\ref{riccati}), then $\kappa P$, $\kappa \geq 1$, is also a solution to (\ref{riccati}). Indeed,
$\kappa PA+\kappa A^TP-2 \kappa ^2 \bar \delta PBB^TP+ I_n$
$= \kappa (PA+ A^TP-2 \kappa \bar \delta PBB^TP+(1/\kappa) I_n)$
$\leq \kappa (PA+ A^TP-2 \bar \delta PBB^TP+ I_n)<0$.
Therefore, $\kappa K$ is also a stabilizing feedback gain, and $\kappa$ can be understood as a coupling strength.
\end{remark}
\section{Observer-Based Leader-Following Consensus}
This section extends the result of the last section to observer-based leader-following consensus. Consider a multi-agent system consisting of $N$ agents and a leader.
The leader agent, labeled as $i=0$, has linear dynamics as
\begin{eqnarray}\label{leader}
\begin{array}{lllll}
\dot x_0=Ax_0,\\
y_0=Cx_0
\end{array}
\end{eqnarray}
where $y_0\in \mathbb{R}^p$ is the output of the leader.
The dynamics of each follower agent, labeled as $i\in \{1, \cdots, N\}$, is
\begin{eqnarray}\label{follower}
\begin{array}{llllll}
\dot x_i=Ax_i+Bu_i,\\
y_i=Cx_i
\end{array}
\end{eqnarray}
where $y_i\in \mathbb{R}^p$ is the agent $i$'s observed output information, and $u_i\in
\mathbb{R}^m$ is agent $i$'s input through which the interaction or coupling between other agents is realized.
More specifically, $u_i$ is a dynamical feedback of $z_i$.
In this section, we assume
\begin{assumption}\label{sta_det}
The pair (A,B) is stabilizable, and the pair (A,C) is detectable.
\end{assumption}
The observer-based feedback controller is represented as
\begin{eqnarray}\label{observerfeedback}
\begin{array}{llllll}
{\dot{\hat{\varepsilon}}}_i=A \hat{\varepsilon}_i+K_o(\hat z_i-z_i)+Bu_i,\\
u_i=F\hat {\varepsilon}_i,
\end{array}
\end{eqnarray}
where
\begin{eqnarray}\label{hatzi}
\hat z_i=\sum_{j\in \mathcal{N}_i}(C\hat{\varepsilon}_j-C\hat{\varepsilon}_i)+d_iC\hat{\varepsilon}_i,
\end{eqnarray}
and the matrices $K_o$ and $F$ are
to be designed later.
\begin{remark}
The term $z_i$ in (\ref{observerfeedback}) indicates that the observer receives the output variable
information from this agent's neighbors as input, and the term $\hat z_i$ indicates that this observer exchanges its state with its neighboring observers.
That is, each observer is implemented according to its local sensing resources. Since $z_i$ and $\hat z_i$ are local, the observer is essentially distributed;
feeding the state of each observer back to the corresponding agent is then again a distributed control scheme.
\end{remark}
By further introducing the following stacked vector
$\hat\varepsilon=(\hat\varepsilon_1^T, \cdots, \hat\varepsilon_N^T)^T$,
$\hat z=(\hat z_1^T, \cdots, \hat z_N^T)^T$, and by using the structure matrices of graph $ \bar{ \mathcal{G}}_{\sigma(t)}$, one has
\begin{eqnarray}\label{hateps}
\dot {\hat\varepsilon} &=& (I_N \otimes A)\hat\varepsilon-[\mathcal{L}_{\sigma(t)}\otimes (K_oC)] \hat\varepsilon-[\mathcal{D}_{\sigma(t)}\otimes (K_oC)] \hat\varepsilon + \nonumber\\
&& \hspace{3cm} [\mathcal{L}_{\sigma(t)}\otimes (K_oC)]\varepsilon+ [\mathcal{D}_{\sigma(t)}\otimes (K_oC)] \varepsilon +(I_N \otimes B)u \nonumber\\
&=& [I_N \otimes (A+BF)-\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\hat\varepsilon +[\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\varepsilon.
\end{eqnarray}
Then
\begin{eqnarray}\label{close}
\begin{array}{llll}
\dot { \varepsilon}=(I_N\otimes A)\varepsilon+ [I_N \otimes (BF)]\hat{\varepsilon}\\
\dot{\hat{\varepsilon}}=[\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\varepsilon+[I_N\otimes A+I_N\otimes (BF)-\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\hat{\varepsilon}
\end{array}
\end{eqnarray}
Let $e=\hat{\varepsilon}-\varepsilon$, that is
\begin{eqnarray*}
\left(
\begin{array}{c}
\varepsilon \\
e \\
\end{array}
\right)
=
\left(
\begin{array}{cc}
I_{nN} & 0 \\
-I_{nN} & I_{nN} \\
\end{array}
\right)
\left(
\begin{array}{c}
\varepsilon \\
\hat{\varepsilon} \\
\end{array}
\right).
\end{eqnarray*}
Under this coordinate transformation, system (\ref{close}) becomes
\begin{eqnarray}\label{system_xe}
\left(
\begin{array}{c}
\dot \varepsilon \\
\dot e \\
\end{array}
\right)
=
\left(
\begin{array}{cc}
I_N \otimes (A+BF) & I_N \otimes BF \\
0 & I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (K_oC)\\
\end{array}
\right)
\left(
\begin{array}{c}
\varepsilon \\
e \\
\end{array}
\right).
\end{eqnarray}
Therefore, observer-based leader-following consensus consists in designing, under the jointly connected graph condition, matrices $K_o$ and $F$ such that system (\ref{system_xe}) is asymptotically stable.
By the separation principle and by referring to Lemma \ref{lemma4}, system (\ref{system_xe}) can be made asymptotically stable through the following two-step design:
\begin{itemize}
\item Design the matrix $K_o$ such that the switched system $\dot e =[I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (K_oC)]e$ is asymptotically stable;
\item Design the matrix $F$ such that $\dot \varepsilon =[I_N \otimes (A+BF)]\varepsilon$ is asymptotically stable.
\end{itemize}
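The separation argument works because the closed-loop matrix in (\ref{system_xe}) is block upper triangular, and the spectrum of a block upper-triangular matrix is the union of the spectra of its diagonal blocks. A quick numerical illustration on random blocks (our own toy data):

```python
import numpy as np

# Spectrum of [[A11, A12], [0, A22]] equals spec(A11) union spec(A22).
rng = np.random.default_rng(1)
A11 = rng.standard_normal((3, 3))
A12 = rng.standard_normal((3, 2))
A22 = rng.standard_normal((2, 2))
M = np.block([[A11, A12],
              [np.zeros((2, 3)), A22]])

eigs_M = np.sort_complex(np.linalg.eigvals(M))
eigs_blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A11),
                                              np.linalg.eigvals(A22)]))
```

Hence stabilizing the two diagonal blocks separately stabilizes the whole system (\ref{system_xe}).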
The first step can be realized by referring to Theorem \ref{theorem2}, replacing the pair $(A, B)$ with $(A^T,C^T)$.
The second step is a standard state feedback control problem.
We summarize the above analysis in the following theorem. The rest of its proof is essentially similar
to that of Theorem \ref{theorem2} and is thus omitted to save space.
\begin{theorem}\label{theorem4}
Consider the multi-agent systems (\ref{leader}-\ref{follower}) associated with switching graphs $\bar{\mathcal{G}}_{\sigma(t)}$ under the Assumptions \ref{jc},
\ref{sta_det} with $T$ small enough. Let $P>0$ be a solution to the Riccati inequality
\begin{eqnarray}\label{riccati3}
PA^T+AP-2\bar \delta PC^TCP+I_n<0,
\end{eqnarray}
then under the control law {\rm(\ref{observerfeedback})} with $K_o=PC^T$ and $F$ being such that $A+BF$ is Hurwitz,
all the agents follow the leader from any initial conditions.
\end{theorem}
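A sketch of the observer-gain computation by duality, mirroring the state-feedback sketch: the Riccati inequality for the pair $(A^T, C^T)$ is handled through the corresponding algebraic Riccati equation. All numerical data below are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed data: (A, C) detectable (here observable), small delta parameter.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
C = np.array([[1.0, 0.0]])
delta_star = 0.05

# Apply the state-feedback construction to (A^T, C^T):
# A P + P A^T - 2*delta_star*P C^T C P + 2I = 0, so the strict inequality holds.
R = np.eye(1) / (2.0 * delta_star)
P = solve_continuous_are(A.T, C.T, 2.0 * np.eye(2), R)

Ko = P @ C.T                      # observer gain, as in the theorem
lhs = P @ A.T + A @ P - 2.0 * delta_star * (P @ C.T @ C @ P) + np.eye(2)
```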
\section{Simulation Results}
In this section, we give two examples to illustrate the validity
of the results. Consider a multi-agent system consisting of a leader and four
agents. Assume the system matrices are
\begin{eqnarray*}
A=\left(
\begin{array}{ccc}
0.5548 & -0.5397 & -0.0757\\
0.3279 & -0.0678 & -0.4495\\
-0.0956 & -0.6640 & 0.0130
\end{array}
\right),
B=\left(
\begin{array}{cc}
3 & 5\\
3 & -2\\
-8 & -8
\end{array}
\right),
C=\left(
\begin{array}{ccc}
1 & -1 & 2\\
-4 & 2 & -3
\end{array}
\right)
\end{eqnarray*}
We suppose that possible
interaction graphs are
$\{\bar{G}_1,\bar{G}_2,\bar{G}_3,\bar{G}_4,\bar{G}_5,\bar{G}_6\}$
which are shown in Figure {\rm\ref{topology}}, and the
interaction graphs are switching as
$\bar{G}_1\rightarrow\bar{G}_2\rightarrow\bar{G}_3\rightarrow\bar{G}_4
\rightarrow\bar{G}_5\rightarrow\bar{G}_6\rightarrow
\bar{G}_1\rightarrow \cdots $, and each graph is active for $1/2$
second. Since the graphs $\bar{G_1}\cup \bar{G_2}\cup
\bar{G_3}$ and $\bar{G_4}\cup \bar{G_5}\cup \bar{G_6}$ are
connected, we can choose $t_k=3k/2$, $t_{k+1}=3(k+1)/2$ and $t_k^1=t_k$,
$t_k^2=t_k+1/2$, $t_k^3=t_k+1$, $t_k^4=t_{k+1}$ with $k=0,1,\cdots$.
We choose a small parameter
$\bar \delta=\frac{1}{3} \min(0.3820, 0.1732)=0.0577$. The matrices $K$ in Theorem \ref{theorem2} and $K_O$, $F$ in Theorem \ref{theorem4} are calculated as
\begin{eqnarray*}
K=\left(
\begin{array}{ccc}
0.7520 & 5.9852 & -2.7041\\
12.6966 & -3.8441 & 1.6419
\end{array}
\right)
\end{eqnarray*}
and
\begin{eqnarray*}
F=\left(
\begin{array}{ccc}
0.6338 & -0.5087 & 0.3731\\
-0.9077 & 0.4509 & -0.1938
\end{array}
\right),
K_O=\left(
\begin{array}{ccc}
-6.7092 & -9.1532\\
-9.6111 & 4.1353\\
7.7514 & -1.1756
\end{array}
\right),
\end{eqnarray*}
respectively.
With the same initial condition, the simulation results of Theorem 2 and Theorem 4 are shown in Figure \ref{figth2}
and Figure \ref{figth4}, respectively.
\section{Conclusions}
This paper presents an averaging approach to the leader-following consensus problem of multi-agent systems with linear dynamics, without imposing the neutral stability condition on the agent dynamics.
The interaction topology is switching.
The proposed protocols force the follower agents to follow the independent leader trajectory.
The result is extended to observer-based protocol design.
Such a design can be separated into a two-step procedure: design asymptotically stable distributed
observers, and design asymptotically stable observer-state-feedback protocols.
\section*{Appendix}
The appendix is devoted to the proof of Lemma \ref{lemma1}. To this end, we first cite the following result.
\begin{lemma}(Ger\v{s}gorin)\cite{horn1985}
For any matrix $G=[g_{ij}]\in \mathbb{R}^{N\times N}$, all the eigenvalues of $G$ are located in the union of $N$ Ger\v{s}gorin
discs
\begin{eqnarray*}
Ger(G):=\cup_{i=1}^{N}\{z\in \mathbb{C}: |z-g_{ii}|\leq \sum_{j\neq i}|g_{ij}|\}.
\end{eqnarray*}
\end{lemma}
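The Ger\v{s}gorin lemma is easy to check numerically; the sketch below (the function name and the test matrix are our own) verifies that every eigenvalue of a sample matrix lies in the union of its discs:

```python
import numpy as np

def gershgorin_contains(G, z):
    """Check whether complex point z lies in the union of Gershgorin discs of G."""
    G = np.asarray(G, dtype=complex)
    radii = np.sum(np.abs(G), axis=1) - np.abs(np.diag(G))  # off-diagonal row sums
    return bool(np.any(np.abs(z - np.diag(G)) <= radii + 1e-12))

G = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
eigs = np.linalg.eigvals(G)
all_inside = all(gershgorin_contains(G, z) for z in eigs)
```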
The notion of a weighted graph will also be used in what follows.
If we assign each edge $(i,j)$ of graph $\bar{\mathcal{G}}$ a weight $w_{ij}$, we obtain a weighted graph $\bar{\mathcal{G}}_{\mathcal W}=(\bar{\mathcal{V}}, \bar{\mathcal{E}},\bar{\mathcal W})$, where $\bar{\mathcal{W}}=[w_{ij}]$.
For a graph $\bar{\mathcal{G}}$ and any positive number $k>0$, the graph $k\bar{\mathcal{G}}$ is defined to be the weighted graph obtained from $\bar{\mathcal{G}}$ by assigning a weight $k$ to each existing edge of $\bar{\mathcal{G}}$.
For two graphs $\bar{\mathcal{G}}_1$ and $\bar{\mathcal{G}}_2$, their union is a weighted graph and the weight
for edge $(i,j)$ is the sum of weights for the two edges $(i,j)$ in the graphs $\bar{\mathcal{G}}_1$ and $\bar{\mathcal{G}}_2$ respectively.
The weighted Laplacian of graph $\bar{\mathcal{G}}_{_{\mathcal{W}}}$
is defined as $\bar{\mathcal{L}}_{_{\mathcal{W}}}=-\bar{\mathcal{A}}_{_{\mathcal{W}}}+\bar{\Lambda_{_{\mathcal{W}}}}$, where $\bar{\mathcal{A}}_{_{\mathcal{W}}}=[w_{ij}a_{ij}]$ and $\bar{\Lambda_{_{\mathcal{W}}}}(i,i)=\sum_{j\neq i}w_{ij}a_{ij}$;
the weighted structure matrix of graph $\bar{\mathcal{G}}_{_{\mathcal{W}}}$
is defined as $\mathcal{H}_{_{\mathcal{W}}}=\mathcal{L}_{_{\mathcal{W}}}+\mathcal{D}_{_{\mathcal{W}}}$, where
$\mathcal{L}_{_{\mathcal{W}}}$ is the weighted Laplacian of the subgraph $\mathcal{G}_{_{\mathcal{W}}}$ of $\bar{\mathcal{G}}_{_{\mathcal{W}}}$, and
$\mathcal{D}_{_{\mathcal{W}}}=diag(w_{_{01}}d_1,\cdots, w_{_{0N}}d_N)$.
{\bf Proof of Lemma \ref{lemma1}:} (1) Denote the Laplacian matrix and the structure matrix of graph $\bar{\mathcal{G}}_i$ by $\bar{\mathcal{L}}_i$ and $\mathcal{H}_i$ respectively, and denote the Laplacian matrix of graph $\mathcal{G}_i$ by $\mathcal{L}_i$.
We first prove the case $m=1$. By the definitions, the following relationship can be easily verified:
\begin{eqnarray*}
\bar{\mathcal{L}}_1=\begin{pmat}({|..})
0 & 0 & \cdots & 0 \cr\-
-d_1 & & & \cr
\vdots & & \mathcal{H}_1& \cr
-d_N & & & \cr
\end{pmat}.
\end{eqnarray*}
Since the graph $\bar{\mathcal{G}}_1$ is connected, $rank(\bar{\mathcal{L}}_1)=N$ \cite{ren2005}. Thus the sub-matrix $\bar{\mathcal {M}}_1$ formed by the last $N$ rows of $\bar{\mathcal{L}}_1$ has rank $N$.
Note that
\begin{eqnarray*}
\left(
\begin{array}{c}
-d_1 \\
\vdots \\
-d_N \\
\end{array}
\right)
=
\mathcal {D}_1
\left(
\begin{array}{c}
1 \\
\vdots \\
1 \\
\end{array}
\right)
=
\mathcal {L}_1\left(
\begin{array}{c}
1 \\
\vdots \\
1 \\
\end{array}
\right)
+
\mathcal {D}_1
\left(
\begin{array}{c}
1 \\
\vdots \\
1 \\
\end{array}
\right)
=
\mathcal {H}_1
\left(
\begin{array}{c}
1 \\
\vdots \\
1 \\
\end{array}
\right),
\end{eqnarray*}
that is, the first column of matrix $\bar{\mathcal {M}}_1$ is a linear combination of its last $N$ columns.
Therefore, $rank(\mathcal {H}_1)=N$, so $\mathcal {H}_1$ has no zero eigenvalue. Furthermore, we claim the eigenvalues of the matrix $\mathcal {H}_1$ are located in the closed right half-plane; indeed, by Ger\v{s}gorin's theorem, all the eigenvalues of $\mathcal {H}_1$ are located in
\begin{eqnarray*}
Ger(\mathcal {H}_1)=\cup_{i=1}^{N}\left\{z\in \mathbb{C}: |z-l_{ii}-d_i|\leq |N_i|\right\},
\end{eqnarray*}
which lies in the closed right half-plane since $l_{ii}=|N_i|$ and $d_i\geq 0$; moreover, the only point of this union on the imaginary axis is $z=0$.
We thus conclude that all the eigenvalues of $\mathcal {H}_1$ have positive real parts.
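The column relation used above, namely that the first column of $\bar{\mathcal M}_1$ equals $-\mathcal H_1 \mathbf{1}$, together with the resulting rank and eigenvalue claims, can be checked on a two-follower toy example (our own data, not from the paper):

```python
import numpy as np

# Small connected example: followers {1,2} with edge (1,2); leader attached to agent 1.
L1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])      # follower-graph Laplacian
D1 = np.diag([1.0, 0.0])          # leader adjacency
H1 = L1 + D1
ones = np.ones(2)

# Since L1 * 1 = 0, we have D1 * 1 = H1 * 1, i.e. the first column of the
# trailing rows of the full Laplacian is -(d_1,...,d_N)^T = -H1 * 1.
d_col = D1 @ ones
```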
We proceed to prove the case when $m>1$. Obviously, for a union $\bar{\mathcal{G}}_{_U}$ of a group of weighted graphs $\{\bar{\mathcal{G}}_1, \cdots, \bar{\mathcal{G}}_m\}$,
its weighted Laplacian matrix $\bar{\mathcal{L}}_{_U}$ is the sum of the Laplacian matrices $\{\bar{\mathcal{L}}_1, \cdots, \bar{\mathcal{L}}_m\}$ of graphs $\{\bar{\mathcal{G}}_1, \cdots, \bar{\mathcal{G}}_m\}$, and
\begin{eqnarray*}
\bar{\mathcal{L}}_U=\begin{pmat}({|..})
0 & 0 & \cdots & 0 \cr\-
-d_1^U & & & \cr
\vdots & \mathcal{H}_1+ & \cdots & +\mathcal{H}_m \cr
-d_N^U & & & \cr
\end{pmat},
\end{eqnarray*}
where
\begin{eqnarray*}
\left(
\begin{array}{c}
-d_1^U \\
\vdots \\
-d_N^U \\
\end{array}
\right)=
\left(
\begin{array}{c}
-d_1^1 \\
\vdots \\
-d_N^1 \\
\end{array}
\right)+\cdots+
\left(
\begin{array}{c}
-d_1^m \\
\vdots \\
-d_N^m \\
\end{array}
\right)
\end{eqnarray*}
with $(d_1^j, d_2^j, \cdots, d_N^j)^T$ being the diagonal elements of matrix $\mathcal {D}_j$, $j=1,\cdots, m$.
When the graphs are jointly connected, that is, when the $\bar{\mathcal{G}}_{_U}$ is connected, the matrix
$\bar{\mathcal{L}}_{_U}$ has a simple zero eigenvalue. Arguing in a manner similar to the $m=1$ case, it can be shown that all the eigenvalues of the matrix $\mathcal{H}_1+ \cdots +\mathcal{H}_m$ have positive real parts.
(2) A similar discussion as in (1), applied to the weighted graphs $\tau_1 \bar{\mathcal{G}}_1, \cdots, \tau_m \bar{\mathcal{G}}_m$, yields the conclusion. $\blacksquare$
\begin{figure}
\caption{Six possible interaction topologies between the leader and the agents.}
\label{topology}
\end{figure}
\begin{figure}
\caption{Simulation for Theorem \ref{theorem2}.}
\label{figth2}
\end{figure}
\begin{figure}
\caption{Simulation for Theorem \ref{theorem4}.}
\label{figth4}
\end{figure}
\makecontacts
\end{document}
\begin{document}
\title{Exploiting algebraic structure in global optimization and the Belgian chocolate problem}
\begin{abstract}
The Belgian chocolate problem involves maximizing a parameter $\delta$ over a non-convex region of polynomials. In this paper we detail a global optimization method for this problem that outperforms previous such methods by exploiting underlying algebraic structure. Previous work has focused on iterative methods that, due to the complicated non-convex feasible region, may require many iterations or result in non-optimal $\delta$. By contrast, our method locates the largest known value of $\delta$ in a non-iterative manner. We do this by using the algebraic structure to go directly to large limiting values, reducing the problem to a simpler combinatorial optimization problem. While these limiting values are not necessarily feasible, we give an explicit algorithm for arbitrarily approximating them by feasible $\delta$.
Using this approach, we find the largest known value of $\delta$ to date, $\delta = 0.9808348$.
We also demonstrate that in low degree settings, our method recovers previously known upper bounds on $\delta$ and that prior methods converge towards the $\delta$ we find.
\end{abstract}
\section{Introduction}\label{sec:intro}
Global optimization problems of practical interest can often be cast as optimization programs over non-convex feasible regions. Unfortunately, iterative optimization over such regions may require large numbers of iterations and result in non-global maxima. Finding all or even many critical points of such programs is generally an arduous, computationally expensive task.
In this paper we show that by exploiting the underlying algebraic structure, we can directly find the largest known values of the Belgian chocolate problem, a famous open problem bridging optimization and control theory. Moreover, this algebraic method does not require any iterative approach. Instead of relying on eventual convergence, our method algebraically identifies points that provide the largest value of the Belgian chocolate problem so far.
While this approach may seem foreign to the reader, we will show that our algebraic optimization method outperforms prior global optimization methods for solving the Belgian chocolate problem. We will contrast our method with the optimization method of Chang and Sahinidis \cite{chang2007global} in particular. Their method used iterative branch-and-reduce techniques \cite{ryoo1996branch} to find what was the largest known value of $\delta$ until our new approach. Due to the complicated feasible region, their method may take huge numbers of iterations or converge to suboptimal points. Our method eliminates the need for these expensive iterative computations by locating and jumping directly to the larger values of $\delta$. This approach has two primary benefits over \cite{chang2007global}. First, it allows us to more efficiently find $\delta$ as we can bypass the expensive iterative computations. This also allows us to extend our approach to cases that were not computationally tractable for \cite{chang2007global}. Second, our approach allows us to produce larger values of $\delta$ by finding a finite set of structured limit points. In low-degree cases, this set provably contains the supremum of the problem, while in higher degree cases, the set contains larger values of $\delta$ than found in \cite{chang2007global}.
The Belgian chocolate problem is a famous open problem in control theory proposed by Blondel in 1994. In the language of control theory, Blondel wanted to determine the largest value of a process parameter for which stabilization of an unstable plant could be achieved by a stable minimum-phase controller \cite{blondel1994simultaneous}. Blondel designed the plant to be a low-degree system that was resistant to known stabilization methods, in the hope that a solution would lead to development of new stabilization techniques. Specifically, Blondel wanted to determine the largest value of $\delta > 0$ for which the transfer function $P(s) = (s^2-1)/(s^2-2\delta s+1)$ can be stabilized by a proper, bistable controller.
For readers unfamiliar with control theory, this problem can be stated in simple algebraic terms. To do so, we will require the notion of a {\it stable} polynomial. A polynomial is stable if all its roots have negative real part. The Belgian chocolate problem is then as follows.\\
\\
\noindent{\bf Belgian chocolate problem:} Determine for which $\delta > 0$ there exist real, stable polynomials $x(s), y(s), z(s)$ with $\deg (x) \geq \deg (y)$ satisfying
\begin{equation}\label{bcp}
z(s) = (s^2-2\delta s+1)x(s)+(s^2-1)y(s).\end{equation}
We call such $\delta$ {\it admissible}. In general, stability of $x,y,z$ becomes harder to achieve the larger $\delta$ is. Therefore, we are primarily interested in the supremum of all admissible $\delta$. If we fix a maximum degree $n$ for $x$ and $y$, then this gives us the following global optimization problem for each $n$.\\
\\
\noindent{\bf Belgian chocolate problem} (optimization version):
\begin{equation}\label{bcp_opt}
\begin{aligned}
& \underset{\delta, x(s), y(s)}{\text{maximize}}
& & \delta \\
& \text{subject to} & & x, y, z\text{ are stable},\\
& & & z(s) = (s^2-2\delta s+1)x(s)+(s^2-1)y(s),\\
& & & \deg(y) \leq \deg(x) \leq n.
\end{aligned}
\end{equation}
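For concreteness, feasibility of a candidate $(\delta, x, y)$ can be checked directly from the definition; the sketch below (function names are ours; polynomials are numpy coefficient arrays, highest degree first) tests stability via the roots:

```python
import numpy as np

def is_stable(p):
    """A real polynomial is stable if every root has negative real part;
    nonzero constants are vacuously stable."""
    p = np.trim_zeros(np.asarray(p, dtype=float), 'f')
    return p.size > 0 and (p.size == 1 or np.all(np.roots(p).real < 0))

def check_feasible(delta, x, y):
    """Check the constraints of the program: x, y and
    z = (s^2 - 2*delta*s + 1)*x + (s^2 - 1)*y must all be stable."""
    z = np.polyadd(np.polymul([1.0, -2.0 * delta, 1.0], x),
                   np.polymul([1.0, 0.0, -1.0], y))
    return is_stable(x) and is_stable(y) and is_stable(z)
```

For instance, `check_feasible(0.5, [1.0], [1.0])` returns `False`: there $z(s)=2s^2-s$ has a root at the origin.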
Note that we can view a degree $n$ polynomial with real coefficients as an $(n+1)$-dimensional real vector of its coefficients. Under this viewpoint, the space of polynomials $x, y, z$ that are stable and satisfy (\ref{bcp}) is an extremely complicated non-convex space. As a result, it is difficult to employ global optimization methods directly on this problem. The formulation above does suggest an undercurrent of algebra in this problem. This will be exploited to transform the problem into a combinatorial optimization problem by finding points that are essentially local optima.
Previous work has employed various optimization methods to find ever larger admissible $\delta$.
Patel et al.~\cite{patel2002some} were the first to show that $\delta = 0.9$ is admissible by $x, y$ of degree at most 11, answering a long-standing question of Blondel.
They further showed that $\delta = 0.93720712277$ is admissible. In 2005, Burke et al. \cite{burke2005analysis} showed that $\delta = 0.9$ is admissible with $x,y$ of degree at most 3. They also improved the record to $\delta = 0.94375$ using gradient sampling techniques. In 2007, Chang and Sahinidis used branch-and-reduce techniques to find admissible $\delta$ as large as $0.973974$ \cite{chang2007global}. In 2012, Boston used algebraic techniques to give examples of admissible $\delta$ up to 0.97646152 \cite{boston2012belgian}. Boston found polynomials that are almost stable and satisfy (\ref{bcp}). Boston then used ad hoc methods to perturb these to find stable $x,y,z$ satisfying (\ref{bcp}). While effective, no systematic method for perturbing these polynomials to find stable ones was given.
In this paper, we extend the approach used by Boston in 2012 \cite{boston2012belgian} to achieve the largest known admissible value of $\delta$ to date. We will refer to this method as the method of {\it algebraic specification}. We show that the almost stable polynomials it produces serve as limiting values of the optimization program; empirically, they achieve the supremum over all feasible $\delta$. Furthermore, we give a theoretically rigorous method for perturbing the almost stable polynomials produced by algebraic specification to obtain stable polynomials. Our approach shows that all $\delta \leq 0.9808348$ are admissible. We further show that previous global optimization methods tend towards the limiting values of $\delta$ found via our method.
We do not assume any familiarity on the reader's part with algebra or control theory and will introduce all relevant notions. While we focus on the Belgian chocolate problem throughout the paper, we emphasize that the general theme of this paper concerns the underlying optimization program. We aim to illustrate that by considering the algebraic structure contained within an optimization problem, we can develop better global optimization methods.
\section{Motivation for our approach}\label{motivation}\label{sec:motivation}
In order to explain our approach, we will discuss previous approaches to the Belgian chocolate problem in more detail. Such approaches typically perform iterative non-convex optimization in the space of stable controllers in order to maximize $\delta$. In \cite{chang2007global}, Chang and Sahinidis formulated, for each $n$, a non-convex optimization program that sought to maximize $\delta$ subject to the polynomials $x, y, (s^2-2\delta s +1)x+(s^2-1)y$ being stable and such that $n \geq \deg(x) \geq \deg(y)$. For notational convenience, we will always define $z = (s^2-2\delta s + 1)x+(s^2-1)y$. Chang and Sahinidis used branch-and-reduce techniques to attack this problem for $n$ up to 10.
Examining the roots of the $x,y,z$ they found for $\deg(x) = 6,8,10$, a pattern emerges. Almost all the roots of these polynomials are close to the imaginary axis and are close to a few other roots. In fact, most of these roots have real part in the interval $(-0.01,0)$. In other words, the $x,y,z$ are approximated by polynomials with many repeated roots on the imaginary axis. It is also worth noting that the only roots of $x$ that were omitted are very close to $-\delta \pm \sqrt{\delta^2-1}$. This suggests that $x$ should have a factor close to $(s^2+2\delta s+1)$.
This suggests the following approach. Instead of using non-convex optimization to iteratively push $x,y,z$ towards polynomials possessing repeated roots on the imaginary axis, we will algebraically construct polynomials with this property. This will allow us to immediately find large limit points of the optimization problem in (\ref{bcp_opt}). While the $x,y,z$ we construct are not stable, they are close to being stable. We will show later that we can perturb $x,y,z$ and thereby push their roots just to the left of the imaginary axis, causing them to be stable. This occurs at the expense of decreasing $\delta$ by an arbitrarily small amount.
Our method only requires examining finitely many such limit points. Moreover, for reasonable degrees of $x$ and $y$, these limit points can be found relatively efficiently. By simply checking each of these limit points, we reduce to a combinatorial optimization problem. This combinatorial optimization problem provably achieves the supremal values of $\delta$ for $\deg(x) \leq 4$. For higher degree $x$, our method finds larger values of $\delta$ than any previous optimization method thus far. In the sections below we will further explain and motivate our approach, and show how this leads to the largest admissible $\delta$ found up to this point.
\section{Main results}\label{sec:math_back}
\subsection{Preliminaries}
Given $t \in \mathbb{C}$, we let $\text{Re}(t)$ denote its real part. We will let $\mathbb{R}[s]$ denote the set of polynomials in $s$ with real coefficients. For $p(s) \in \mathbb{R}[s]$, we call $p(s)$ {\it stable} if every root $t$ of $p$ satisfies $\text{Re}(t) < 0$. We let $H$ denote the set of all stable polynomials in $\mathbb{R}[s]$. We call $p(s)$ {\it quasi-stable} if every root $t$ of $p$ satisfies $\text{Re}(t) \leq 0$. We let $\overline{H}$ denote the set of quasi-stable polynomials of $\mathbb{R}[s]$. We let $H^m, \overline{H^m}$ denote the sets of stable and quasi-stable polynomials respectively of degree at most $m$.
\begin{definition}We call $\delta$ {\it admissible} if there exist $x,y \in H$ such that $\deg(x) \geq \deg(y)$ and
\begin{equation}
(s^2-2\delta s+1)x(s) + (s^2-1)y(s) \in H.\end{equation}\end{definition}
\begin{definition}We call $\delta$ {\it quasi-admissible} if there exist $x,y \in \overline{H}$ such that $\deg(x) \geq \deg(y)$ and
\begin{equation}
(s^2-2\delta s+1)x(s) + (s^2-1)y(s) \in \overline{H}.\end{equation}\end{definition}
Note that since quasi-stability is weaker than stability, quasi-admissibility is weaker than admissibility. Our main theorem (Theorem \ref{main_thm} below) will show that if $\delta$ is quasi-admissible, then all smaller $\delta$ are admissible. Note that this implies that the Belgian chocolate problem is equivalent to finding the supremum of all admissible $\delta$. We will then find quasi-admissible $\delta$ in order to establish which $\delta$ are admissible. This is the core of our approach. These quasi-admissible $\delta$ are easily identified and are limit points of admissible $\delta$.
In practice, one verifies stability by using the Routh-Hurwitz criteria. Suppose we have a polynomial $p(s) = a_0s^n + a_1s^{n-1} + \ldots + a_{n-1}s + a_n \in \mathbb{R}[s]$ such that $a_0 > 0$. Then we define the $n\times n$ {\it Hurwitz matrix} $A(p)$ as
$$A(p) = \begin{pmatrix}
a_1 & a_3 & a_5 & \ldots & \ldots & 0 & 0\\
a_0 & a_2 & a_4 & \ldots & \ldots & 0 & 0\\
0 & a_1 & a_3 & \ldots & \ldots & 0 & 0\\
0 & a_0 & a_2 & \ldots & \ldots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & \ldots & a_{n-2} & a_n\end{pmatrix}.$$
Adolf Hurwitz showed that a real polynomial $p$ with positive leading coefficient is stable if and only if all leading principal minors of $A(p)$ are positive.
While it may seem natural to conjecture that $p$ is quasi-stable if and only if all leading principal minors are nonnegative, this only works in one direction.
\begin{lemma}Suppose $p$ is a real polynomial with positive leading coefficient. If $p$ is quasi-stable then all the leading principal minors of $A(p)$ are nonnegative.\end{lemma}
\begin{proof}If $p(s)$ is quasi-stable, then for all $\epsilon > 0$, $p(s+\epsilon)$ is stable. Therefore, for all $\epsilon > 0$, the leading principal minors of $A(p(s+\epsilon))$ are all positive. Note that
$$\lim_{\epsilon \to 0} A(p(s+\epsilon)) = A(p).$$
Since the minors of a matrix are expressible as polynomial functions of the entries of the matrix, the leading principal minors of $A$ are limits of positive real numbers. They are therefore nonnegative.\end{proof}
To see that the converse does not hold, consider $p(s) = s^4 + 198s^2 + 101^2$. Its Hurwitz matrix has nonnegative leading principal minors, but $p$ is not quasi-stable. This example, as well as a more complete characterization of quasi-stability given below, can be found in \cite{asner1970total}. In particular, it is shown in \cite{asner1970total} that a real polynomial $p$ with positive leading coefficient is quasi-stable if and only if for all $\epsilon > 0$, $A(p(s+\epsilon))$ has positive leading principal minors.
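Both the Routh-Hurwitz test and this counterexample are easy to check numerically. The following sketch (assuming NumPy; \texttt{hurwitz\_matrix} is our helper, not a library routine) builds $A(p)$ and inspects its leading principal minors and the roots of $p$:

```python
import numpy as np

def hurwitz_matrix(coeffs):
    """Hurwitz matrix A(p) of p(s) = a0 s^n + ... + an, coeffs = [a0, ..., an]."""
    n = len(coeffs) - 1
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)  # entry (i, j) holds a_{2(j+1)-(i+1)}
            if 0 <= k <= n:
                A[i, j] = coeffs[k]
    return A

# p(s) = s^4 + 198 s^2 + 101^2: every leading principal minor is >= 0 ...
p = [1, 0, 198, 0, 101**2]
A = hurwitz_matrix(p)
minors = [np.linalg.det(A[:m, :m]) for m in range(1, 5)]
assert all(m >= -1e-9 for m in minors)

# ... yet p is not quasi-stable: some of its roots have positive real part.
assert max(r.real for r in np.roots(p)) > 0.5
```

The same helper gives the full Routh-Hurwitz stability test by instead requiring every leading principal minor to be strictly positive.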
\subsection{Quasi-admissible and admissible $\delta$}\label{delta_theory}
We first present the following theorem concerning which $\delta$ are admissible. We will defer the proof until later as it is a simple corollary to a stronger theorem about approximating polynomials in $\overline{H}$ by polynomials in $H$.
\begin{theorem}\label{delta_prop}If $\delta$ is admissible then all $\hat{\delta} < \delta$ are also admissible.\end{theorem}
For $\delta = 1$, note that the Belgian chocolate problem reduces to whether there are $x,y \in H$ with $\deg(x) \geq \deg(y)$ such that $(s-1)^2x + (s^2-1)y \in H$. This cannot occur for non-zero $x,y$ since $(s-1)^2x + (s^2-1)y$ has a root at $s = 1$. Theorem \ref{delta_prop} then implies that any $\delta \geq 1$ is not admissible. In 2012, Bergweiler and Eremenko showed that any admissible $\delta$ must satisfy $\delta < 0.999579$ \cite{bergweiler2013gol}.
On the other hand, if we fix $x,y$ then there is no single largest admissible $\delta$ associated to $x,y$. Standard results from control theory show that if $\delta$ is admissible by $x, y$ then for $\epsilon$ small enough, $\delta+\epsilon$ is admissible by the same polynomials.
Therefore, the supremum $\delta^*$ over all admissible $\delta$ will not be associated to stable $x,y$. From an optimization point of view, the associated optimization program in (\ref{bcp_opt}) has an open feasible region. In particular, the set of admissible $\delta$ for (\ref{bcp_opt}) is of the form $(0,\delta_n^*)$ for some $\delta_n^*$ that is not admissible by $x,y$ of degree at most $n$. However, as we will later demonstrate, quasi-admissible $\delta$ lie on the boundary of this feasible region. Moreover, quasi-admissible $\delta$ naturally serve as analogues of local maxima. We will therefore find quasi-admissible $\delta$ and use these to find admissible $\delta$. In Section \ref{sec:approx} we will prove the following theorem relating admissible and quasi-admissible $\delta$. The following is the main theorem of our work and demonstrates the utility of searching for quasi-admissible $\delta$.
\begin{theorem}\label{main_thm}If $\delta$ is quasi-admissible, then all $\hat{\delta} < \delta$ are admissible. Moreover, if $\delta$ is quasi-admissible by quasi-stable $x,y$ of degree at most $n$, then any $\hat{\delta} < \delta$ is admissible by stable $\hat{x}, \hat{y}$ of degree at most $n$. \end{theorem}
This theorem shows that to find admissible $\delta$, we need only find quasi-admissible $\delta$. In fact, our proof will show that if $\delta$ is quasi-admissible via $x,y$ of degree at most $n$, then all $\hat{\delta} < \delta$ are admissible via polynomials of degree at most $n$ as well. In short, quasi-admissible $\delta$ serve as upper limit points of admissible $\delta$. Also note that since admissibility implies quasi-admissibility, Theorem \ref{main_thm} implies Theorem \ref{delta_prop}.
The proof of Theorem \ref{main_thm} will be deferred until Section \ref{sec:approx}. In fact, we will do more than just prove the theorem: we will give an explicit algorithm for approximating quasi-admissible $\delta$ by admissible $\hat{\delta}$ within any desired tolerance. We will also use the techniques in Section \ref{sec:approx} to prove the following theorem, which shows that admissible $\delta$ are always smaller than some quasi-admissible $\delta$.
\begin{theorem}\label{rev_thm}If $\delta$ is admissible by $x,y$ of degree at most $n$ then there is some $\hat{\delta} > \delta$ that is quasi-admissible by $\hat{x}, \hat{y}$ of degree at most $n$. Moreover, this $\hat{\delta}$ is not admissible by these polynomials.\end{theorem}
In other words, for any admissible $\delta$, there is a larger $\hat{\delta}$ that is quasi-admissible but not necessarily admissible. Therefore, we can restrict to looking at polynomials $x,y,z$ with at least one root on the imaginary axis.
\section{Low degree examples}\label{low_degree_ex}
In this section we demonstrate that in low-degree settings, the supremum of all admissible $\delta$ in (\ref{bcp_opt}) is actually a quasi-admissible $\delta$. By looking at quasi-stable polynomials that are not stable, we can greatly reduce our search space and directly find the supremum of the optimization program in (\ref{bcp_opt}).
For small degrees of $x, y$, we will algebraically design quasi-stable polynomials that achieve previously known bounds on the Belgian chocolate problem in these degrees.
Burke et al. \cite{burke2005analysis} showed that for $x\in H^3, y \in H^0$, any admissible $\delta$ must satisfy $\delta < \sqrt{2+\sqrt{2}}/2$ and for $x \in H^4, y \in H^0$, $\delta$ must satisfy $\delta < \sqrt{10+2\sqrt{5}}/4$. He et al. \cite{guannan2007stabilization} later found $x \in H^4, y \in H^0$ admitting $\delta$ close to this bound.
In fact, these upper bounds on admissible $\delta$ are actually quasi-admissible $\delta$ that can be obtained in a straightforward manner. For example, suppose we restrict to $x$ of degree 3, $y$ of degree 0. Then for some $A, B, C, k \in \mathbb{R}$, we have
\begin{gather*}
x(s) = s^3+ As^2 + Bs + C\\
y(s) = k\end{gather*}
Instead of trying to find admissible $\delta$ using this $x$ and $y$, we will try to find quasi-admissible $\delta$. That is, we want $\delta$ such that
$$z(s) = (s^2-2\delta s + 1)x(s) + (s^2-1)y(s) \in \overline{H}.$$
In other words, this $z(s)$ can be quasi-stable instead of just stable. Note that $z(s)$ must be of degree 5. We will specify a form for $z(s)$ that ensures it is quasi-stable. Consider the case $z(s) = s^5$. This is clearly quasi-stable as its only roots are at $s = 0$. To ensure that $z(s) = s^5$ and equation (\ref{bcp}) holds, we require
\begin{gather*}
(s^2-2\delta s + 1)(s^3+ As^2 + Bs + C)+(s^2-1)k = s^5\end{gather*}
Equating coefficients gives us the following 5 equations in 5 unknowns.
\begin{gather*}
A-2\delta=0\\
-2A\delta + B + 1=0\\
A - 2B\delta + C + k=0\\
B-2C\delta=0\\
C-k=0\end{gather*}
In fact, ensuring that we have as many equations as unknowns was part of the motivation for letting $z(s) = s^5$. Solving for $A,B,C,k,\delta$, we find
\begin{gather*}
8\delta^4-8\delta^2+1=0\\
A = 2\delta\\
B = 4\delta^2-1\\
C = 4\delta^3-2\delta\\
k = 4\delta^3-2\delta\end{gather*}
Taking the largest real root of $8\delta^4-8\delta^2+1$ gives $\delta = \sqrt{2+\sqrt{2}}/2$. Taking $A,B,C,k$ as above yields polynomials $x, y, z$ with real coefficients. One can verify that $x$ is stable (via the Routh-Hurwitz test, for example), while $y$ is degree 0 and therefore stable. Note that since $z(s) = s^5$, $z$ is only quasi-stable. Therefore, there is $x \in H^3, y \in H^0$ for which $\sqrt{2+\sqrt{2}}/2$ is quasi-admissible. This immediately gives the limiting value for $x \in H^3, y \in H^0$ discovered by Burke et al.~\cite{burke2005analysis}. Combining this with Theorem \ref{main_thm}, we have shown the following theorem.
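The solution above is straightforward to verify numerically. A sketch assuming NumPy, reconstructing $x$ and $y$ from the solved coefficients and checking that the combination collapses to $s^5$:

```python
import numpy as np

delta = np.sqrt(2 + np.sqrt(2)) / 2          # largest real root of 8d^4 - 8d^2 + 1
assert np.isclose(8*delta**4 - 8*delta**2 + 1, 0)

A, B, C = 2*delta, 4*delta**2 - 1, 4*delta**3 - 2*delta
k = C
x = [1, A, B, C]                             # s^3 + A s^2 + B s + C
z = np.polyadd(np.polymul([1, -2*delta, 1], x),   # (s^2 - 2 delta s + 1) x(s)
               np.polymul([1, 0, -1], [k]))       # + (s^2 - 1) y(s), y = k
assert np.isclose(z[0], 1) and np.allclose(z[1:], 0)   # z(s) = s^5
```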
\begin{theorem}For $\deg(x) \leq 3$, $\delta = \frac{\sqrt{2+\sqrt{2}}}{2}$ is quasi-admissible and all $\delta < \frac{\sqrt{2+\sqrt{2}}}{2}$ are admissible.\end{theorem}
Next, suppose that $x$ has degree 4 and $y$ has degree 0. For $A, k, \delta \in \mathbb{R}$, define
\begin{gather*}
x(s) = (s^2+2\delta s + 1)(s^2+A)\\
y(s) = k\end{gather*}
Note that as long as $A \geq 0$, $x$ will be quasi-stable and $y$ will be stable for any $k$. As above, we want quasi-admissible $\delta$. We let $z(s) = s^6$, so that $z(s)$ is quasi-stable. Finding $A, \delta, k$ amounts to solving
\begin{gather*}
(s^2-2\delta s + 1)x(s) + (s^2-1)y(s) = z(s)\\
\Leftrightarrow (s^2-2\delta s + 1)(s^2+2\delta s+1)(s^2+A)+(s^2-1)k = s^6\\
\Leftrightarrow s^6 + (A - 4\delta^2 + 2)s^4 + (-4A\delta^2 + 2A + k + 1)s^2 + (A-k) = s^6
\end{gather*}
Note that the $(s^2+2\delta s + 1)$ term in $x$ is used to ensure that the left-hand side will have zero coefficients in its odd degree terms. Since $(s^2+2\delta s + 1)$ is stable, it does not affect stability of $x$. Equating coefficients and manipulating, we get the following equations.
\begin{gather*}
16\delta^4 -20\delta^2+5=0\\
A -4\delta^2+2=0\\
k -A=0\end{gather*}
Taking the largest real root of $16\delta^4 -20\delta^2+5$ gives $\delta = \sqrt{10+2\sqrt{5}}/4$. For this $\delta$ one can easily see that $A = 4\delta^2 - 2 \geq 0$, so $x$ is quasi-stable, as are $y$ and $z$ by design. Once again, we were able to easily achieve the limiting value of Burke et al.~\cite{burke2005analysis} discussed above by searching for quasi-admissible $\delta$. Combining this with Theorem \ref{main_thm}, we obtain the following theorem.
\begin{theorem}For $\deg(x) \leq 4$, $\delta = \frac{\sqrt{10+2\sqrt{5}}}{4}$ is quasi-admissible and all $\delta < \frac{\sqrt{10+2\sqrt{5}}}{4}$ are admissible.\end{theorem}
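As before, the claimed degree-4 solution can be verified numerically (a sketch assuming NumPy):

```python
import numpy as np

delta = np.sqrt(10 + 2*np.sqrt(5)) / 4       # largest real root of 16d^4 - 20d^2 + 5
assert np.isclose(16*delta**4 - 20*delta**2 + 5, 0)

A = 4*delta**2 - 2
k = A
assert A >= 0                                 # so x is quasi-stable
x = np.polymul([1, 2*delta, 1], [1, 0, A])    # (s^2 + 2 delta s + 1)(s^2 + A)
z = np.polyadd(np.polymul([1, -2*delta, 1], x),
               np.polymul([1, 0, -1], [k]))
assert np.isclose(z[0], 1) and np.allclose(z[1:], 0)   # z(s) = s^6
```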
The examples above demonstrate how, by considering quasi-stable $x,y$ and $z$, we can find quasi-admissible $\delta$ that are limiting values of admissible $\delta$. Moreover, the quasi-admissible $\delta$ above were found by solving relatively simple algebraic equations instead of having to perform optimization over the space of stable $x$ and $y$.
\section{Algebraic specification}\label{sec:alg_spec}
The observations in Sections \ref{sec:motivation} and \ref{sec:math_back} and the examples in Section \ref{low_degree_ex} suggest the following approach, which we refer to as {\it algebraic specification}. This method will be used to find the largest known values of $\delta$ for any given degree. We wish to construct quasi-stable $x(s), y(s), z(s)$ with repeated roots on the imaginary axis satisfying (\ref{bcp}). For example, we may wish to find polynomials of the following form:
\begin{gather*}
x(s) = (s^2+2\delta s+1)(s^2+A_1)^4(s^2+A_2)^2(s^2+A_3)^2(s^2+A_4)\\
y(s) = k(s^2+B_1)^3(s^2+B_2)^2\\
z(s) = s^{14}(s^2+C_1)^2(s^2+C_2)(s^2+C_3)\end{gather*}
We refer to such an arrangement of $x,y,z$ as an {\it algebraic configuration}. As long as $\delta > 0$, the parameters $\{A_i\}_{i=1}^4$, $\{B_i\}_{i=1}^2$, and $\{C_i\}_{i=1}^3$ are all nonnegative, and $k$ is real, $x(s), y(s), z(s)$ will be real, quasi-stable polynomials. We then wish to solve
\begin{equation}\label{alg_eq}
(s^2-2\delta s+1)x(s)+(s^2-1)y(s)=z(s)\end{equation}
Recall that the $(s^2+2\delta s+1)$ factor in $x(s)$ is present to ensure that the left-hand side has only even degree terms, as the right-hand side clearly only has even degree terms. Expanding (\ref{alg_eq}) and equating coefficients, we get 11 equations in 11 unknowns. Using PHCPack \cite{verschelde1999algorithm} to solve these equations and selecting the solution with the largest $\delta$ such that the $A_i, B_i, C_i \geq 0$, we get the following solution, rounded to seven decimal places:
\begin{gather*}
\delta = 0.9808348\\
A_1 = 1.1856917\\
A_2 = 6.6228807\\
A_3 = 0.3090555\\
A_4 = 0.2292503\\
B_1 = 0.5430391\\
B_2 = 0.2458118\\
C_1 = 4.4038385\\
C_2 = 0.7163490\\
C_3 = 7.4637156\\
k = 196.1845537
\end{gather*}
The actual solution has $\delta = 0.980834821202\ldots$. This is the largest $\delta$ we have found to date using this method. By Theorem \ref{main_thm}, we conclude the following theorem.
\begin{theorem}All $\delta \leq 0.9808348$ are admissible.\end{theorem}
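One can check this solution numerically by reconstructing $x$, $y$, $z$ from the rounded constants above and verifying that the defining identity holds up to rounding error. A sketch assuming NumPy (\texttt{even\_poly} is our helper, not a library routine):

```python
import numpy as np

def even_poly(consts, powers):
    """Coefficients (highest degree first) of prod_i (s^2 + c_i)^{p_i}."""
    out = np.array([1.0])
    for c, p in zip(consts, powers):
        for _ in range(p):
            out = np.polymul(out, [1.0, 0.0, c])
    return out

delta = 0.9808348
x = np.polymul([1, 2*delta, 1],
               even_poly([1.1856917, 6.6228807, 0.3090555, 0.2292503], [4, 2, 2, 1]))
y = 196.1845537 * even_poly([0.5430391, 0.2458118], [3, 2])
z = np.polymul(even_poly([4.4038385, 0.7163490, 7.4637156], [2, 1, 1]),
               [1.0] + [0.0] * 14)                       # multiply by s^14
lhs = np.polyadd(np.polymul([1, -2*delta, 1], x), np.polymul([1, 0, -1], y))
assert lhs.shape == z.shape == (23,)                     # both of degree 22
assert np.max(np.abs(lhs - z)) < 1e-2                    # equal up to rounding
```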
In general, we can form an algebraic configuration for $x(s), y(s), z(s)$ as
\begin{equation}\label{xconf}
x(s) = (s^2+2\delta s +1) \prod_{i=1}^{m_1} (s^2+A_i)^{j_i}.\end{equation}
\begin{equation}\label{yconf}
y(s) = k\prod_{i=1}^{m_2}(s^2+B_i)^{k_i}.\end{equation}
\begin{equation}\label{zconf}
z(s) = s^c\prod_{i=1}^{m_3}(s^2+C_i)^{\ell_i}.\end{equation}
For fixed degrees of $x$ and $y$, note that there are only finitely many such configurations. Instead of performing optimization over the non-convex feasible region of the Belgian chocolate problem, we tackle the combinatorial optimization problem of maximizing $\delta$ among the possible configurations.
Note that $c$ in (\ref{zconf}) is whatever exponent is needed to make $\deg(z) = \deg(x)+2$. We want $x,y,z$ to satisfy (\ref{bcp}). Expanding and equating coefficients, we get equations in the undetermined variables above. As long as the number of unknown variables equals the number of equations, we can solve and look for real solutions with $\delta$ and all $A_i, B_i, C_i$ nonnegative.
Not all quasi-stable polynomials can be formed via algebraic specification. In particular, algebraic specification forces all the roots of $y,z$ and all but two of the roots of $x$ to lie on the imaginary axis. However, more general quasi-stable $x,y,z$ could have some roots with negative real part and some with zero real part. This makes the possible search space infinite and, as discussed in Section \ref{low_degree_ex}, empirically does not result in larger $\delta$. Further evidence for this statement will be given in Section \ref{sec:opt}.
While the method of algebraic specification has demonstrable effectiveness, it becomes computationally infeasible to solve these general equations for very large $n$. In particular, the space of possible algebraic configurations of $x,y,z$ grows almost exponentially with the degree of the polynomials. For large $n$, an exhaustive search over the space of possible configurations becomes infeasible, especially as the equations become more difficult to solve.
We will describe an algebraic configuration via the shorthand
\begin{equation}\label{alg_conf_shorthand}
[j_1,\ldots,j_{m_1}],[k_1,\ldots, k_{m_2}],[\ell_1,\ldots, \ell_{m_3}].\end{equation}
This represents the configuration described in (\ref{xconf}), (\ref{yconf}), (\ref{zconf}) above. In particular, if the second term of (\ref{alg_conf_shorthand}) is empty then $y = k$, while if the third term of (\ref{alg_conf_shorthand}) is empty then $z$ is a power of $s$. For example, the following configuration is given by $[3,1],[2],[1]$:
\begin{gather*}
x(s) = (s^2+2\delta s + 1)(s^2+A_1)^3(s^2+A_2)\\
y(s) = k(s^2+B_1)^2\\
z(s) = s^{10}(s^2+C_1)\end{gather*}
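Since there are only finitely many configurations for a given degree, they can be enumerated mechanically. The following Python sketch (the helpers are ours, not from any library) lists shorthand configurations for even $\deg(x) = n$, keeping those where the number of unknowns ($\delta$, $k$, and one parameter per quadratic factor) matches the $(n+2)/2$ coefficient equations:

```python
def partitions(n, max_part=None):
    """Weakly decreasing tuples of positive integers summing to n."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def configurations(n):
    """Candidate configurations [j...],[k...],[l...] for even deg(x) = n >= 2.

    deg(x) = 2 + 2*sum(j), deg(y) = 2*sum(k) <= n, deg(z) = n + 2, and the
    number of quadratic parameters must be (n + 2)//2 - 2 so that the number
    of unknowns equals the number of coefficient equations.
    """
    quads = (n + 2) // 2 - 2
    for js in partitions((n - 2) // 2):
        for sk in range(n // 2 + 1):
            for ks in partitions(sk):
                for sl in range((n + 2) // 2 + 1):
                    for ls in partitions(sl):
                        if len(js) + len(ks) + len(ls) == quads:
                            yield js, ks, ls

# deg(x) = 4 admits exactly one configuration, [1],[],[]: the example worked above.
assert list(configurations(4)) == [((1,), (), ())]
```

For $n = 20$ this enumeration contains, among others, the configuration $[4,2,2,1],[3,2],[2,1,1]$ used for the record value above.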
A table containing the largest quasi-admissible $\delta$ we have found and their associated algebraic configurations for given degrees of $x$ is given below. Note that for each entry of the table, given $\deg(x) = n$ and quasi-admissible $\delta$, Theorem \ref{main_thm} implies that all $\hat{\delta} < \delta$ are admissible with $x,y$ of degree at most $n$.
\begin{figure}
\caption{The largest known quasi-admissible $\delta$ for $x,y,z$ designed algebraically, for varying degrees of $x$.}
\end{figure}
\section{Approximating quasi-admissible $\delta$ by admissible $\delta$}\label{sec:approx}
In this section we will prove Theorem \ref{main_thm}. Our proof will be algorithmic in nature. We will describe an algorithm that, given $\delta$ that is quasi-admissible by quasi-stable polynomials $x, y$, will produce for any $\hat{\delta} < \delta$ stable polynomials $\hat{x}, \hat{y}$ admitting $\hat{\delta}$. Moreover, given $\deg(x) = n$, we will ensure that $\deg(\hat{x}) \leq n$.
\begin{proof}[of Theorem \ref{main_thm}]Suppose that for a given $\delta$ there are $x,y,z \in \overline{H}$ with $\deg(x) \geq \deg(y)$ satisfying (\ref{bcp}). Let $n = \deg(x)$. Define
$$R(s) := \dfrac{(s^2-1)y(s)}{z(s)}.$$
Note that for any $s \in \mathbb{C}$, $R(s) = 0$ iff $(s^2-1)y(s) = 0$, $R(s) = 1$ iff $(s^2-2\delta s+1)x(s) = 0$, and $R(s)$ is infinite iff $z(s) = 0$. Since $x,y,z$ are quasi-stable, we know that for $\text{Re}(s) > 0$, $R(s) = 1$ iff $s = \delta \pm i\sqrt{1-\delta^2}$ and $R(s) = 0$ iff $s = 1$. All other points where $R(s)$ is 0, 1, or infinite satisfy $\text{Re}(s) \leq 0$.
Precomposing $R(s)$ with the fractional linear transformation $f(s) = (1+s)/(1-s)$, we get the complex function
$$D(s) := R\bigg(\dfrac{1+s}{1-s}\bigg).$$
Note that this fractional linear transformation maps the unit circle $\{ s : |s| = 1\}$ to the imaginary axis $\{ s : \text{Re}(s) = 0\}$ and the open unit disk to the open right half-plane. Also note that $f^{-1}(1) = 0$ and $f^{-1}(\delta \pm i\sqrt{1-\delta^2}) = \pm it$, where $t = \sqrt{1-\delta}/\sqrt{1+\delta}$. Therefore, $D(s)$ satisfies the following properties:
\begin{enumerate}
\item For $|s| < 1$, $D(s) = 0$ iff $s = 0$.
\item For $|s| < 1$, $D(s) = 1$ iff $s = \pm it$.
\item $|D(s)| < \infty$ for $|s| < 1$.
\end{enumerate}
Note that the last property holds by the quasi-stability of $z(s)$: since $z(s) = 0$ implies $\text{Re}(s) \leq 0$, $D(s) = \infty$ implies $|s| \geq 1$. In particular, the roots of $x, y, z$ that have zero real part now correspond to points with $|s| = 1$ such that $D(s) = 1, 0, \infty$ respectively. For any $\epsilon > 0$, let
$$D_\epsilon(s) := D\bigg(\frac{s}{1+\epsilon}\bigg).$$
$D_\epsilon(s)$ then satisfies
\begin{enumerate}
\item For $|s| \leq 1$, $D_\epsilon(s) = 0$ iff $s = 0$.
\item For $|s| \leq 1$, $D_\epsilon(s) = 1$ iff $s = \pm i(1+\epsilon)t$.
\item $|D_\epsilon(s)| < \infty$ for $|s| \leq 1$.
\end{enumerate}
Precomposing with the inverse fractional linear transformation $f^{-1}(s) = (s-1)/(s+1)$, we get
$$R_\epsilon(s) := D_\epsilon\bigg(\dfrac{s-1}{s+1}\bigg).$$
By the properties of $D_\epsilon(s)$ above, we find that $R_\epsilon(s)$ satisfies
\begin{enumerate}
\item For $\text{Re}(s) \geq 0$, $R_\epsilon(s) = 0$ iff $s = 1$.
\item For $\text{Re}(s) \geq 0$, $R_\epsilon(s) = 1$ iff $s = \delta_\epsilon\pm i\sqrt{1-\delta_\epsilon^2}$ where
$$\delta_\epsilon = \dfrac{1-(1+\epsilon)^2t^2}{1+(1+\epsilon)^2t^2}.$$
\item For $\text{Re}(s) \geq 0$, $|R_\epsilon(s)| < \infty$.
\end{enumerate}
Moreover, $R_\epsilon(s) \neq 0, 1, \infty$ for any $s$ such that $\text{Re}(s) < 0$. We can write $R_\epsilon(s)$ in lowest terms as $R_\epsilon(s) = p(s)/q(s)$.
Note that by the first property of $R_\epsilon$, the only root of $p(s)$ in $\{s | \text{Re}(s) \geq 0\}$ is at $s = 1$. By properties of $f(s), f^{-1}(s)$, one can show that $p(-1) = 0$. This follows from the fact that $R(-1) = 0$, which implies that $\lim_{s\to \infty} D(s) = \lim_{s\to\infty}D_{\epsilon}(s) = 0$, and therefore $R_\epsilon(-1) = 0$. Therefore, $p(s) = (s^2-1)y_\epsilon(s)$ where $y_\epsilon(s)$ has no roots in $\{s | \text{Re}(s) \geq 0\}$.
By the second property of $R_\epsilon$, the only roots of $q-p$ in $\{s | \text{Re}(s) \geq 0\}$ are at $\pm \delta_\epsilon + i\sqrt{1-\delta_\epsilon^2}$. Therefore, $q-p = (s^2-2\delta_\epsilon s+1)x_\epsilon(s)$ where $x_\epsilon(s)$ has no roots in $\{s | \text{Re}(s) \geq 0\}$.
Finally, by the third property of $R_\epsilon$ we find that $z_\epsilon(s) = (s^2-2\delta_\epsilon s+1)x_\epsilon(s)+(s^2-1)y_\epsilon(s)$ is stable. Moreover, basic properties of fractional linear transformations show that if $\deg (x) = n \geq \deg(y) = m$, then $x_\epsilon, y_\epsilon$ are both of degree $n$. Therefore, $x_\epsilon, y_\epsilon, z_\epsilon$ are stable polynomials satisfying (\ref{bcp}) for $\delta_\epsilon$. For any $\hat{\delta} < \delta$, we can take $\epsilon$ such that $\delta_\epsilon = \hat{\delta}$, proving the desired result.\end{proof}
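The two ingredients of this proof, the behavior of the map $f(s) = (1+s)/(1-s)$ and the effect of the shrinking step, can be sanity-checked numerically (a sketch assuming NumPy; the specific $\delta$ below is arbitrary):

```python
import numpy as np

delta = 0.951057                       # any 0 < delta < 1 works here
t = np.sqrt((1 - delta) / (1 + delta))

# f maps +-it to the points delta +- i sqrt(1 - delta^2) where R = 1:
f = lambda s: (1 + s) / (1 - s)
w = f(1j * t)
assert np.isclose(w.real, delta)
assert np.isclose(w.imag, np.sqrt(1 - delta**2))

# shrinking by 1/(1+eps) lowers delta slightly, and delta_eps -> delta as eps -> 0:
for eps in [0.1, 0.01, 0.001]:
    u = (1 + eps)**2 * t**2
    delta_eps = (1 - u) / (1 + u)
    assert 0 < delta_eps < delta
assert abs(delta_eps - delta) < 1e-3   # last iterate, eps = 0.001
```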
Note that if we start with $\delta$ admissible by stable $x,y,z$ of degree at most $n$, then we can do the reverse of this procedure to perturb $x,y,z$ to quasi-stable $\hat{x}, \hat{y}, \hat{z}$. By the reverse of the arguments above, $\hat{x}, \hat{y}, \hat{z}$ will be quasi-stable but at least one of these polynomials will not be stable. These polynomials will be associated to some quasi-admissible $\hat{\delta} > \delta$. This gives the proof of Theorem \ref{rev_thm}.
The proof above describes the following algorithm for perturbing quasi-stable $x,y,z$ satisfying (\ref{bcp}) to obtain stable $\hat{x}, \hat{y},\hat{z}$ satisfying (\ref{bcp}).\\
\\
\noindent{\bf Input:} Real numbers $\delta, \epsilon > 0$ and real polynomials $x,y,z \in \overline{H}$ satisfying (\ref{bcp}).\\
\noindent{\bf Output:} $\hat{\delta}$ and real polynomials $\hat{x},\hat{y},\hat{z} \in H$ satisfying (\ref{bcp}).
\begin{enumerate}
\item Let $R(s) = (s^2-1)y(s)/z(s)$. For $\epsilon > 0$, compute
$$R_\epsilon(s) = R\bigg(\dfrac{(2+\epsilon)s + \epsilon}{\epsilon s + (2+\epsilon)} \bigg).$$
\item Reduce $R_\epsilon(s)$ to lowest terms. Suppose that in lowest terms $R_\epsilon(s) = p(s)/q(s)$.
\item Factor $p(s)$ as $(s^2-1)\hat{y}(s)$ and factor $q(s)-p(s)$ as $(s^2-2\hat{\delta}s+1)\hat{x}(s)$. Let $\hat{z}(s) = q(s)$.
\end{enumerate}
To further illustrate the method of algebraic specification and this algorithm for perturbing to get quasi-stable polynomials, we give the following detailed example.
\begin{example}Say we are interested in $x$ of degree 4. We may then give the following algebraic specification of $x, y, z$ discussed in Section \ref{low_degree_ex}. In the shorthand of (\ref{alg_conf_shorthand}), this is the configuration $[1],[],[]$.
\begin{gather*}
x(s) = (s^2+2\delta s+1)(s^2+A)\\
y(s) = k\\
z(s) = s^6\end{gather*}
As in Section \ref{low_degree_ex}, we solve $(s^2-2\delta s+1)x(s) + (s^2-1)y(s) = z(s)$. This implies that $\delta, A, k$ satisfy $16\delta^4-20\delta^2+5 = 0$, $A = 4\delta^2-2$, $k = 4\delta^2 - 2$. Taking the largest real root of $16\delta^4-20\delta^2+5$ gives $\delta = \sqrt{10+2\sqrt{5}}/4$ and $A = k = (\sqrt{5}+1)/2$. Given numerically to six decimal places, $\delta = 0.951057$. Computing $R(s)$ using exact arithmetic, we get
\begin{gather*}
R(s) = \dfrac{(s^2-1)y(s)}{z(s)} = \dfrac{(s^2-1)(\sqrt{5}+1)}{2s^6}\end{gather*}
We then use a fractional linear transformation $s \mapsto (1+s)/(1-s)$ to get:
\begin{align*}
D(s) &= R((1+s)/(1-s))\\
&= \dfrac{2s(\sqrt{5}+1)(s-1)^4}{s^6+6s^5+15s^4+20s^3+15s^2+6s+1}\end{align*}
One can verify that $D(s)$ equals 1 at points on the boundary of the unit circle, so we push these points away from the boundary (with $\epsilon = 0.01$) by defining
\begin{align*}
D_\epsilon(s) &=D\big(\frac{s}{1+0.01}\big)\\
&=\dfrac{6.40805(0.99010s-1)^4s}{0.942045s^6+\ldots+5.94054s}\end{align*}
While we gave an approximate decimal form above for brevity, this computation can and should be done with exact arithmetic. We let $R_\epsilon(s) = D_\epsilon((s-1)/(s+1))$. Writing $R_\epsilon(s)$ as $p(s)/q(s)$ in lowest terms, we get:
\begin{gather*}
p(s) = 64080.55401(0.990990s+199.00990)^4(s^2-1)\\
q(s) = 0.62122\times 10^{14}s^6 + \ldots +0.94204\end{gather*}
As proved above, $p(s)$ will equal $(s^2-1)\hat{y}(s)$. Dividing $p(s)$ by the $s^2-1$ factor, we get a polynomial $\hat{y}(s)$ whose only root is at $s \approx -201$. Therefore $\hat{y}(s)$ is stable. The denominator $\hat{z}(s)$ is easily verified to have only roots with negative real part. Finally, the polynomial $q(s) - p(s)$ will equal $(s^2-2\hat{\delta}s+1)\hat{x}(s)$. Finding its roots, one can show that $q(s)-p(s)$ has only roots with negative real part, except for the roots at $s = 0.950097 \pm 0.311954i$. These roots are of the form $\hat{\delta} \pm \sqrt{\hat{\delta}^2-1}$ for $\hat{\delta} = 0.950097$. Therefore $\hat{\delta} = 0.950097$ is admissible via the stable polynomials $\hat{x},\hat{y},\hat{z}$. While we have decreased $\delta$ slightly, we have achieved stability in the process. By decreasing $\epsilon$, we can get arbitrarily close to our original $\delta$.
\end{example}
\section{Optimality of algebraic specification}\label{sec:opt}
Not only does our method of algebraic specification find larger $\delta$ than previously known; previous approaches to the Belgian chocolate problem can also be viewed as approximating algebraic specification. In particular, previously discovered admissible $\delta$ can be seen as approximations of quasi-admissible $\delta'$ that can be found via algebraic specification.
For example, in \cite{chang2007global}, Chang and Sahinidis found that $\delta = 0.9739744$ is admissible by
\begin{align*}
x(s) &=s^{10} + 1.97351109136261s^9\\
&+5.49402092964662s^8 + 8.78344232801755s^7\\
&+ 11.67256448604672s^6 + 13.95449016040116s^5\\
&+11.89912895529042s^4 + 9.19112429409894s^3\\
&+5.75248874640322s^2+2.03055901420484s\\
&+1.03326203778346,\\
y(s)&=0.00066128189295s^5+3.611364710425s^4\\
&+0.03394722108511s^3+3.86358782861648s^2\\
&+0.0178174691792s+1.03326203778319.
\end{align*}
The roots of $x,y,z$ were discussed in Section \ref{motivation}. As previously noted, $x,y,z$ are close to polynomials with repeated roots on the imaginary axis. Examining the roots of $x,y,z$, one can see that $x,y,z$ are tending towards quasi-stable polynomials $x', y', z'$ that have the same root structure as the algebraic configuration $[3,1],[2],[1]$. In other words, we will consider the following quasi-stable polynomials:
\begin{gather*}
x'(s) = (s^2+2\delta' s + 1)(s^2+A_1)^3(s^2+A_2)\\
y'(s) = k(s^2+B)^2\\
z'(s) = s^{10}(s^2+C)
\end{gather*}
Solving for the free parameters and finding the largest real $\delta'$ such that $A_1, A_2, B, C \geq 0$, we obtain the following values, given to seven decimal places.
\begin{gather*}
\delta' = 0.9744993\\
A_1 = 1.3010813\\
A_2 = 0.4475424\\
B = 0.5345301\\
C = 2.5521908\\
k = 3.4498736.\end{gather*}
One can easily verify that, taking these values of the parameters, the roots of $x, y, z$ are close to the roots of $x',y',z'$. These algebraically designed $x', y', z'$ possess the root structure that $x,y,z$ are tending towards. Moreover, the $x', y', z'$ show that $\delta'$ is quasi-admissible, and this $\delta'$ gives an upper bound for the $\delta$ found by Chang and Sahinidis. This demonstrates that the stable polynomials found by Chang and Sahinidis are tending towards the quasi-stable ones listed above. Moreover, by Theorem \ref{main_thm}, all $\delta < 0.9744993$ are admissible.
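To see this numerically (a check of our own, using the coefficients quoted above), one can compute the roots of the Chang--Sahinidis $x(s)$ and $y(s)$ and observe that they cluster near the imaginary axis, with real parts at or just below zero:

```python
# Roots of the Chang-Sahinidis polynomials (coefficients quoted in the text).
import numpy as np

x_coeffs = [1, 1.97351109136261, 5.49402092964662, 8.78344232801755,
            11.67256448604672, 13.95449016040116, 11.89912895529042,
            9.19112429409894, 5.75248874640322, 2.03055901420484,
            1.03326203778346]
y_coeffs = [0.00066128189295, 3.611364710425, 0.03394722108511,
            3.86358782861648, 0.0178174691792, 1.03326203778319]

for name, coeffs in (("x", x_coeffs), ("y", y_coeffs)):
    roots = np.roots(coeffs)
    # Stability, as claimed in the text: no root should sit meaningfully to
    # the right of the imaginary axis (small tolerance for the round-off
    # amplified by the clustered, near-repeated roots).
    assert roots.real.max() < 1e-2
    print(name, np.sort_complex(np.round(roots, 4)))
```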
In fact, many examples of admissible $\delta$ given in previous work are approximating quasi-admissible $\delta$ found via algebraic specification. This includes the previously mentioned examples in \cite{burke2005analysis} and all admissible values of $\delta$ given by Chang and Sahinidis in \cite{chang2007global}. We further conjecture that for all admissible $\delta$, there is a quasi-admissible $\delta' > \delta$ that can be achieved by algebraically specified $x,y,z$.
More formally, if we fix $x, y$ to be of degree at most $n$, let $\delta_n^*$ denote the supremum of the optimization problem in (\ref{bcp_opt}). Note that, as discussed in Section \ref{delta_theory}, $\delta_n^*$ is not admissible by $x,y$ of degree at most $n$. The empirical evidence given in this section and in Sections \ref{sec:motivation} and \ref{low_degree_ex} suggests that this $\delta_n^*$ is quasi-admissible and can be obtained through algebraic specification. This leads to the following conjecture.
\begin{conjecture}For all $n$, $\delta_n^*$ is quasi-admissible by some $x,y,z$ that are formed via algebraic specification.\end{conjecture}
\section{Conclusion}
The Belgian chocolate problem has remained resilient to direct global optimization techniques for over a decade. Most prior work attempts to maximize $\delta$ subject to the stability constraints by applying iterative methods to complicated non-convex regions. By contrast, we find the largest known value of $\delta$ in a more direct fashion. We do this by reducing our problem to combinatorial optimization over a finite set of algebraically constructed limit points. Our key algebraic insight is that quasi-admissible $\delta$ are limiting values of the admissible $\delta$. In fact, previous methods actually find admissible $\delta$ that approach quasi-admissible $\delta$. We give the method of algebraic specification to design quasi-stable polynomials and directly find these quasi-admissible $\delta$ by solving a system of equations. We then show that we can perturb these quasi-stable polynomials to obtain stable polynomials with admissible $\delta$ that are arbitrarily close to the quasi-admissible $\delta$. We show that this method recovers the largest admissible $\delta$ known to date and gives a much better understanding of the underlying landscape of admissible and quasi-admissible $\delta$. We conjecture that for all $n$, the supremum of all $\delta$ admissible by $x,y$ of degree at most $n$ is a quasi-admissible $\delta$ that can be found through our method of algebraic specification.
\section*{Acknowledgments}
The authors would like to thank Bob Barmish for his valuable feedback, discussions, and advice. The first author was partially supported by the National Science Foundation grant DMS-1502553. The second author was partially supported by the Simons Foundation grant MSN179747.
\end{document}
\begin{document}
\title{Sensor-assisted fault mitigation in quantum computation}
\author{John L.\ Orrell} \email[Corresponding author: ]{[email protected]}
\affiliation{Pacific Northwest National Laboratory, Richland, WA 99352, USA}
\author{Ben Loer}
\affiliation{Pacific Northwest National Laboratory, Richland, WA 99352, USA}
\date{\today}
\begin{abstract}
We propose a method to assist fault mitigation in quantum computation through the use of sensors co-located near physical qubits. Specifically, we consider using transition edge sensors co-located on silicon substrates hosting superconducting qubits to monitor for energy injection from ionizing radiation, which has been demonstrated to increase decoherence in transmon qubits. We generalize from these two physical device concepts and explore the potential advantages of co-located sensors to assist fault mitigation in quantum computation. In the simplest scheme, co-located sensors beneficially assist rejection of calculations potentially affected by environmental disturbances. Investigating the potential computational advantage further required development of an extension to the standard formulation of quantum error correction. In a specific case of the standard three-qubit, bit-flip quantum error correction code, we show that given a 20\% overall error probability per qubit, approximately 90\% of repeated calculation attempts are correctable. However, when \emph{sensor-detectable} errors account for 45\% of overall error probability, the use of co-located sensors uniquely associated with independent qubits boosts the fraction of correct final-state calculations to 96\%, at the cost of rejecting 7\% of repeated calculation attempts.
\end{abstract}
\maketitle
\section{\label{sec:intro}Introduction}
Many mechanisms may lead to state decoherence in the physical implementation of quantum computing systems. Recent reports \cite{PhysRevLett.121.117001,Oliver2020,Cardani2020} show deleterious effects in superconducting kinetic inductance devices and superconducting transmon qubits correlated with ionizing radiation levels, identifying yet another mechanism causing decoherence. As others have~\cite{Cardani2019}, we postulate these observed phenomena stem from the same underlying process: the instantaneous injection of energy into the superconducting device and the device's substrate as a result of impinging ionizing radiation. It is possible to reduce the rate of ionizing radiation energy injections by shielding against naturally occurring radiation sources in the laboratory and by placing systems underground to shield against cosmic rays. These techniques are commonly employed for rare event searches in nuclear and particle physics research, including searches operating at mK temperatures~\cite{PhysRevD.95.082002,Armengaud2017,PhysRevD.100.102002,ISI:000386879300001,ALDUINO20199,ISI:000475616600001}. However, the history of such physics research experiments demonstrates it is difficult to entirely shield against the ionizing radiation present in any instrumentation laboratory. Thus, we contemplate superconducting qubit operation in a regime of low, but non-zero, rates of ionizing-radiation-induced energy injections. From there we draw an inference to a superconducting qubit device concept employing co-located sensors that can signal when an ionizing radiation energy injection has occurred, signifying probable error in the quantum computation.
We employ the terminology \emph{fault mitigation in quantum computation} to distinguish from purely \emph{quantum} computational means for achieving fault tolerance or error correction~\cite{7167244}. In the simplest application of our device concepts, we show co-located sensors can provide modest fault mitigation through selective result-acceptance in redundant (``many shot'') computation schemes, where the same quantum calculation is repeated multiple times. Speculatively, as this will require advances in superconducting qubit interconnection techniques, we explore how co-located sensors can identify uncorrectable errors within the framework of quantum error correction codes.
\section{\label{sec:TES-assisted_qubit_concept}TES-assisted qubit device concept}
\begin{figure*}
\caption{Photograph of a CDMS II ZIP detector contained within its hexagonal copper housing~\cite{CDMS-iZIP-photo}.}
\label{fig:cdms-ii-zip}
\end{figure*}
This section presents a notional concept for the physical implementation of devices combining ionizing radiation transition edge sensors (TES) and superconducting qubits that share a common silicon substrate.
\subsection{\label{sec:TES}Transition edge sensor devices}
In a TES~\cite{ISI:000231009400003}, the material's effective temperature is set such that the material resides on the ``transition edge'' between the superconducting and normal conducting states. Any additional energy added to the material will increase the temperature and push the TES toward the normal conducting phase, dramatically raising the electrical resistance of the material. Sensing this change in resistance in a circuit makes the TES useful for detecting small amounts of absorbed energy.
A key step in the development~\cite{doi:10.1063/1.1770037,Irwin2005} of TES devices as practical sensors was the use of direct current (DC) voltage bias to provide negative electrothermal feedback (ETF) to stabilize the readout circuit~\cite{doi:10.1063/1.113674}. As diagrammed in the cited seminal reference, superconducting quantum interference devices (SQUIDs) are typically used to monitor the resistance-dependent current in the ETF TES circuit through a current-induced magnetic field. While the TES may reside at tens of mK~temperatures in the ``mixing chamber'' stage of a refrigerator, the SQUIDs monitoring the circuit are typically located at a warmer stage, often at a $\simeq600$~mK ``still'' stage~\cite{AKERIB2008476}. This provides for physical separation and magnetic shielding between the TES devices and the SQUIDs.
\begin{figure*}
\caption{Micrograph of IBM 5-qubit device.}
\label{fig:ibmqx2_yorktown_microgrpah}
\caption{IBM 5-qubit scheme with 3 QETs.}
\label{fig:ibmqx2_yorktown_schematic}
\caption{Gate connections with sensor patch.}
\label{fig:ibmqx2_yorktown_connections}
\caption{Labeled micrograph of the IBM 5-qubit \texttt{ibmqx2} device.}
\label{fig:ibmqx2_yorktown}
\end{figure*}
TES sensors developed by the SuperCDMS collaboration~\cite{PhysRevD.95.082002} employ QET devices, defined as Quasiparticle-trap-assisted Electrothermal feedback Transition-edge sensor devices~\cite{doi:10.1063/1.1146105}. In these QET devices, superconducting aluminum films are deposited on Ge or Si crystals, in contact with the tungsten-based ETF TES devices. Phonon energy present in the crystal substrate breaks Cooper pairs in the superconducting Al films. The resultant quasiparticles diffuse through the Al film to the W-based ETF TES, ultimately resulting in a TES transition event used for event detection. Typically, multiple QET devices are operated in parallel in a circuit to provide increased phonon energy collection coverage with a single sensor channel.
Figure~\ref{fig:cdms-ii-zip} shows a CDMS ZIP (Z-dependent Ionization- and Phonon-mediated) detector~\cite{CDMS-iZIP-photo}. We added the schematics~\cite{PhysRevD.72.052009}, scale overlays, and highlighting lines. Detailed descriptions of lithographic fabrication techniques for similar devices are available~\cite{JASTRAM201514}. It is worth noting the QET devices used in these detector applications are essentially ``classical'' signal sensors. That is, the TES circuit operates through a process of Joule heating of a material in response to a thermalizing population of quasiparticles, produced by a population of thermal and athermal substrate phonons.
\subsection{\label{sec:superconducting_qubit}Superconducting qubit devices}
There are many modalities for the physical implementation of qubits.
These modalities include trapped ions, superconducting circuits, photon systems manipulated either with linear optics or quantum dots, neutral atoms, semiconductor devices typified by either optically active nitrogen vacancies in diamond or electronically manipulated electron spins, and most recently topological qubits that are based on collective properties of solid state systems~\cite{NAP25196}. In all cases, the goal is to isolate a physical two-level quantum system that can be manipulated for quantum computation. In this report we focus on superconducting qubit devices.
In this work we consider transmon qubits~\cite{PhysRevA.76.042319} based on our experience with them in studies of the effect of ionizing radiation on their coherence time~\cite{Oliver2020}. Furthermore, the IBM Q Experience~\cite{IBMQ} provides access to transmon-based multi-qubit devices~\cite{ISI:000399429500002,ISI:000542630400002} for cloud-based quantum computing. We use these resources as a reference for exploring sensor-assisted fault mitigation in quantum computation.
Figure~\ref{fig:ibmqx2_yorktown_schematic} shows, in schematic representation, the combination of three QET devices with the qubit chip layout, capturing our proposed hybrid sensor and qubit device concept. A notional connectivity diagram (Fig.~\ref{fig:ibmqx2_yorktown_connections}) further abstracts the generalized idea of a co-located sensor for detection of environmental disturbances.
\subsection{\label{sec:TES-assisted_qubit_devices}TES-assisted qubit devices}
A hybrid device as suggested by Figure~\ref{fig:ibmqx2_yorktown_schematic} is producible with today's fabrication techniques. Furthermore, we do not foresee any inherent incompatibility in co-operation of the DC voltage biased QET devices and the microwave frequency controls of the qubits. Specifically, QET devices on a silicon chip are operated using a DC voltage bias across the TES of approximately 40~mV. From the TES-SQUID circuit's quiescent state, ionizing radiation induced events appear as $\simeq$5~$\mu$s rising-edge current excursions of $\simeq$100~nA amplitudes and $\simeq$100~$\mu$s pulse decay times. These representative operational details are derived from the SuperCDMS HVeV chip-scale dark matter search devices~\cite{PhysRevLett.121.051301,doi:10.1063/1.5010699}.
The above described QET operating characteristics are in contrast to transmon qubit operation following the theory of circuit quantum electrodynamics (cQED)~\cite{Schuster2007}. Qubits are typically controlled via radiofrequency pulses applied through co-planar waveguide microwave transmission lines, typically in the $\simeq$5~GHz range. Specifically, qubits are coupled to the transmission line via superconducting Al circuit meander resonators designed to have unique resonance frequencies in the same $\simeq$5~GHz range, resulting from the details of their physical shape. Each qubit's resonance frequency is designed to lie off-resonance (detuned) from the paired resonator's resonance frequency to allow dispersive readout from the qubit via the resonator~\cite{doi:10.1063/1.5089550}. Multiple such qubit-resonator pairs can exist on the same silicon chip and even connect to same transmission line~\cite{Jerger_2011}, so long as all resonance frequencies are fully offset. The $\simeq$5~GHz RF control pulses are typically $\simeq$10s of nanoseconds duration and have millivolt scale amplitudes at the readout resonator, resulting in $\simeq$100s of nanoamperes of current in the qubit circuit.
The hybrid devices we envision, having the above-described characteristics, would consist of QET and transmon qubit devices simultaneously operated at roughly 30--50~mK. There are two obvious possible ``cross-talk'' scenarios between the QETs and the qubits. The first is through near-resonance coupling of RF qubit control pulses in the QET. We believe the QET physical layout can be optimized to reduce the potential for this coupling. It is not obvious current excursions in the QET devices would have any coupling to the qubit circuits. The second ``cross-talk'' mechanism is through quasiparticle generation via power input from either device type. There is ample evidence from the operation of arrays of QET and superconducting qubit devices that each device type can be operated without substantial injection of thermal energy into the substrate, which would result in elevated quasiparticle levels in the superconducting circuits of either device type. We are not aware of any conceptually similar device created to date that experimentally tests the veracity of these claims.
In the next section, we assess the potential value of such a co-located sensor in contributing to fault mitigation in quantum computations. The initial evaluation considers plausible devices we believe can be fabricated today. Such devices would likely employ co-located sensors in a ``veto'' role to reject computations suspected of excessive error-inducing environmental disturbances. Taking the assessment a step further, we speculate on the error correction performance of independent qubits systems, where each qubit is uniquely associated with an individual co-located sensor. In the case of superconducting qubits, this idealization would manifest in the case where QET-qubit pairs each reside on separate silicon substrate chips and are potentially interconnected through superconducting air-bridges or capacitive coupling across gaps between chips. We note the choice of the class of TES/QET devices~\cite{Ullom_2015} for the co-located sensor is potentially interchangeable with microwave kinetic inductance detectors (MKIDs)~\cite{Day_2003} or superconducting nanowire detectors~\cite{Natarajan_2012}.
\section{\label{sec:error_estimation}Quantum error mitigation}
Pedagogical development of qubit-based quantum error correction considers two complementary forms of error: bit-flip error and sign-flip error. Within the Bloch sphere picture of a qubit, these errors correspond to state error and phase error. These two flip-type quantum errors are highly idealized \emph{binary symmetric channel} representations of the otherwise continuous error experienced by real qubits~\cite{Devitt_2013}. We note ionizing radiation induced error in superconducting transmon qubits is almost certainly a continuous noise source best represented by arbitrary three-angle unitary transformations (or much worse). However, for our goal of developing an intuition for the relative utility of sensor-assisted error mitigation in quantum computation, we will focus solely on bit-flip errors, to the exclusion of all others. This assumption and other assumptions we make in the following developments are assessed in the Discussion section.
Our goal is to determine how information gained from a co-located sensor---\emph{without performing any measurement on the quantum computation qubit(s)}---can assist in the implementation of error mitigation in quantum computation. We begin with the hybrid device concept presented in Section~\ref{sec:TES-assisted_qubit_concept}, Figure~\ref{fig:ibmqx2_yorktown_schematic}. For illustrative purposes, we make use of the IBM Quantum Experience~\cite{ibmqx2_yorktown} as a source of some realistic scenarios, specifically working with the Yorktown (\texttt{ibmqx2}) 5-qubit backend~\cite{PhysRevLett.109.240504}. We will refer to this simply as the ``Yorktown backend'' for brevity. We conclude by investigating a fully abstracted hypothetical case when co-located sensors are uniquely assigned to individual, independent qubits.
\begin{figure}
\caption{A simple balanced Deutsch-Jozsa calculation used as a test case for investigating the role of co-located sensors in calculations performed by devices such as the IBM 5-qubit \texttt{ibmqx2} device.}
\label{fig:balanced-dj-circuit}
\end{figure}
\begin{figure}
\caption{Results from three implementations of a balanced Deutsch-Jozsa calculation (see Fig.~\ref{fig:balanced-dj-circuit}).}
\label{fig:balanced-dj}
\end{figure}
\subsection{\label{sec:example_error}Example calculation: Repetition and error}
Quantum error correction is often presented as an approach toward the correction of errors in an idealized, \emph{single-pass} quantum computation calculation. The application of quantum computation routinely uses computational repetition (repeating the same calculation many times) to achieve averaged results that approach the idealized, single-pass calculation result for large numbers of repetitions. Furthermore, a single-pass quantum calculation is only able to return the ``correct'' answer in cases where the result is uniquely identifiable with a single eigenvector of the measurement basis. More generally, in cases analogous to quantum phase estimation and/or quantum state tomography, the relative weight of the measurement basis eigenvectors---determined through computational repetition---is key to determining the underlying quantum state. Thus, in quantum calculations, computational repetition is used advantageously in \emph{both} statistical averaging for error mitigation \emph{and} quantum state estimation as part of the underlying calculation method. In addition, and in entire generality, if erroneous final states are identifiable within this repetition process, then either better accuracy is obtained for a fixed number of repetitions or the same accuracy is achievable with fewer repetitions.
Figure~\ref{fig:balanced-dj}(a) shows the results from 81,920~repetitions of a simple balanced Deutsch-Jozsa calculation (see Fig.~\ref{fig:balanced-dj-circuit}) implemented on the Yorktown backend. The ``correct'' result is equal weight in each of the four states $|001\rangle$, $|011\rangle$, $|101\rangle$, and $|111\rangle$ (i.e., 25\% of the 81,920~trials in each of the four states), with statistical fluctuations from the finite sample size. However, the data report \emph{at minimum} 5,734 trials of the repeated quantum calculation were in error, reporting measurements of the states $|000\rangle$, $|010\rangle$, $|100\rangle$, and $|110\rangle$.
We contemplate the possibility that \emph{some} of the error states are the result of ionizing radiation striking the Yorktown backend during the computational repetitions. Our prior work~\cite{Oliver2020} suggests the actual fraction of ionizing radiation disturbances is small for devices such as the Yorktown backend. However, for the sake of intellectual exploration, we wish to consider cases where some significant percentage of the induced error states are due to ionizing radiation or some other environmental disturbance detectable by a co-located sensor. We are thus implicitly assuming some error-inducing phenomena are also \emph{not} detectable by the co-located sensor, as is normally assumed in quantum error correction schemes. For concreteness, we consider a case where 60\% of the errors are \underline{\emph{not}} due to ionizing radiation (or some other environmental disturbance) and hence not detectable by a co-located sensor on the qubit chip.
We have no method for assessing the true error cases for any particular computational repetition on the Yorktown backend, so we must create a model of the noise. The Qiskit framework provides a mechanism for simulating the noise of a specific backend device, based on measured gate error rates and coherence times. Fig.~\ref{fig:balanced-dj} shows the results of many such calculations performed during the week of 12 October 2020. Unfortunately, we are not aware of a way to use the Qiskit noise model to determine, for a single repetition of the calculation, when an error may have been induced (modeled) for a qubit. Thus, we created a simple bit-flip-based noise model simulation designed to \emph{mimic} the statistical properties of the Yorktown backend performing the balanced Deutsch-Jozsa calculation.
We assign a single bit-flip (\textbf{\texttt{X}}-gate) to follow each of the eleven operations on the qubits in the circuit diagram of Figure~\ref{fig:balanced-dj-circuit}, including the control qubit on \textit{qubit}$_1$. We find setting the bit-flip error probability to 7\% in this highly over-simplified model simulation roughly reproduces the balanced Deutsch-Jozsa calculation's statistical distribution of results seen on the actual Yorktown backend device. Thus, we now have a method for determining within a single repetition of the calculation when an error was induced within the quantum circuit by any one (or more) of the bit-flip errors. We simulated 81,920 single-shot calculations, where each time a balanced Deutsch-Jozsa circuit was created with a randomly generated set of bit-flip errors contained within the circuit, based upon the 7\% gate error probability mentioned above. The results are shown in Fig.~\ref{fig:balanced-dj}(c). Recall, rather than assuming 100\% of the induced errors are due to an environmental disturbance that can be detected by the co-located sensor, we instead assume 60\% of the errors are \underline{\emph{not}} detectable by the co-located sensor.
Fig.~\ref{fig:balanced-dj}(d) shows the results when the co-located sensor would provide information to reject a number of the calculations (20,282 shots in this case) that are expected to potentially be in error. This improves the performance of the quantum calculation, showing a reduction of the fraction of calculation repetitions reporting states $|000\rangle$, $|010\rangle$, $|100\rangle$, and $|110\rangle$ compared to that shown in Fig.~\ref{fig:balanced-dj}(c). Our first substantial conclusion is that this improvement comes at the cost of rejecting a number of the calculations outright. We repeat: the calculation improves because those repetitions with the potential for being environmentally disturbed are preferentially rejected from consideration when calculating the final results after all repetitions are complete. The Appendix to this report further investigates the statistical properties of the results shown in Fig.~\ref{fig:balanced-dj}.
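The veto bookkeeping can be mimicked with a toy Monte Carlo (a sketch of our own, not the authors' simulation; the 7\% per-location flip probability, the eleven error locations, and the 60\% sensor-undetectable fraction are the figures quoted above):

```python
# Toy Monte Carlo of the sensor-veto scheme (our sketch). We track only
# whether a shot contains any injected bit-flip and whether the co-located
# sensor would have flagged that shot.
import random

P_GATE_ERR = 0.07          # per-location bit-flip probability (from the text)
N_LOCATIONS = 11           # error locations in the circuit (from the text)
FRAC_UNDETECTABLE = 0.60   # fraction of errors NOT visible to the sensor

def one_shot(rng):
    had_error = flagged = False
    for _ in range(N_LOCATIONS):
        if rng.random() < P_GATE_ERR:
            had_error = True
            # The remaining 40% are environmental, hence sensor-detectable.
            if rng.random() >= FRAC_UNDETECTABLE:
                flagged = True
    return had_error, flagged

rng = random.Random(0)
shots = 81920
kept = kept_err = total_err = vetoed = 0
for _ in range(shots):
    err, flag = one_shot(rng)
    total_err += err
    if flag:
        vetoed += 1          # sensor fired: discard this repetition
    else:
        kept += 1
        kept_err += err
print(f"vetoed {vetoed}/{shots}; "
      f"error rate kept: {kept_err/kept:.3f} vs all: {total_err/shots:.3f}")
```

The error rate among the kept shots is markedly lower than the unconditional rate, at the cost of discarding the vetoed fraction, which is the trade-off described above.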
The form of error mitigation described above is of the simplest variety. The co-located sensor provides a case-by-case capacity to reject or ``veto'' individual, ``single-shot'' calculations. At the expense of throwing away the so-flagged calculation trials, it is possible to improve the numerical accuracy of quantum calculations employing repetition for purposes of result averaging or quantum state determination via the measurement eigenvector weightings. While these improvements are modest, we believe devices such as that described by Figure~\ref{fig:ibmqx2_yorktown_schematic} can be fabricated today and take advantage of sensors to selectively reject calculations where environmental phenomena have potentially disturbed the quantum computational system.
\subsection{\label{sec:two_error_types}Error types: Environmental and entangling}
We now propose to distinguish more clearly between two classes of phenomena resulting in quantum decoherence of qubit systems. In this discussion, we have in mind superconducting qubit devices, but we believe these definitions are sufficiently general to apply to other physical implementations of qubits. We suggest framing two types of qubit error-generation mechanisms that can appear in physical qubit systems: (1) environmental disturbances and (2) effects involving quantum entanglement. These two types are not mutually exclusive, but they should be exhaustive. As such, we warily adopt a substantively \emph{different} meaning for the terms ``environment'' and ``environmental,'' due to a lack of better terminology. We acknowledge our use of these terms may seem counter to the sense used by other authors.
For this report we consider environmental error-inducing disturbances as those phenomena that are \emph{independent} of the presence or absence of a qubit state. In a superconducting qubit device, we have in mind phenomena such as energy injection from ionizing radiation, leakage of UV, optical, or IR photons into the system, thermal heat transients, fluctuating externally-generated magnetic fields, and fluctuating externally-generated electric fields (e.g., RF). In these cases, the phenomenon impinges on the qubit system \emph{and the immediate vicinity}, independent of the presence or absence of a qubit holding a quantum state. In these cases, we propose an appropriate sensor can potentially detect the error-inducing environmental disturbance without \emph{any} explicit or implicit influence on the state of a qubit in the vicinity of the disturbance. We henceforth refer to these error-inducing disturbances as ``environmental'' error sources. These errors are entirely incoherent errors within a computation.
A second class of error-inducing effects must also exist. This second class distinguishes itself through the quantum state entanglement produced as a result of the interaction between the error-inducing phenomenon and the presence of a qubit state. In a superconducting qubit device, we have in mind phenomena such as coupling to two-level systems (TLS) and off-resonance coupling to other device elements. In these cases, a co-located measurement of the entangled error-inducing effect has the potential to produce back-action on the qubit's quantum state. Thus, we refer to these types of errors as ``entangling'' error sources. These ``entangling'' error types can result in both incoherent and coherent error within computations.
We expect both types of errors described above are present in physical implementations of quantum computing systems. Throughout this study our representative assumption is that the entangling error constitutes 60\% of the overall error probability, though other fractions are evaluated as well.\footnote{Assuming 100\% of errors are of the entangling type is equivalent to the typical, pedagogical assumption in quantum error correction. Assuming 0\% of the errors are of the entangling type means \emph{all} errors are potentially identifiable by a co-located sensor, which we consider an unlikely and uninteresting limiting case.}
\begin{figure*}
% [Circuit diagram not recovered]
\caption{\label{fig:QC-S111-E111-SingleCircuit}Quantum circuit for sensor-assisted, three-qubit, bit-flip error correction. Columns~5~\&~6 form the error channel of environmental- and entangling-type bit-flip errors, columns~7--9 are the co-located sensor readouts, columns~10--18 perform syndrome extraction and correction, and columns~19--24 decode and measure the preserved state.}
\end{figure*}
\subsection{\label{sec:middle_case}Sensor-assist in quantum error correction}
We now evaluate a more speculative scenario abstracted and generalized from the preceding sections. We assume all qubits experience \emph{entirely} independent errors and that a co-located sensor is associated with each qubit. We further assume a typical set of quantum computational gates is available, all errors in the error channel are bit-flip errors, circuit gates introduce no errors outside of the error channel, and the ancilla qubits are reliable for their purpose of extracting a syndrome measurement. A number of such assumptions are made throughout the following development, and they are explored in the Discussion (Sec.~\ref{sec:discussion}).
Figure~\ref{fig:QC-S111-E111-SingleCircuit} shows a quantum circuit for performing error correction when the error channel (columns 5~\&~6) is composed of independent environmental- and entangling-error types, as described above. In describing this quantum circuit, we focus on the key differences from a standard three-qubit, bit-flip error correction code. Columns~1--4 initialize three qubits, set a quantum state $|\Psi\rangle$ to preserve, and then encode the quantum state in the expanded three-qubit computational basis space. Column~5 includes a potential bit-flip error (pink ``\textbf{\texttt{X}}?''-gates) on each of the three computational qubits, representing an environmental disturbance that can be detected by a co-located sensor. Column~6 represents the possibility of entangling-type errors on any of the three qubits, shown as purple ``\textbf{\texttt{X}}?''-gates. Columns~7--9 represent the three co-located sensor readouts that are uniquely identified with each of the three physical qubits used for the state preservation.\footnote{Co-located sensors might also be associated with the ancilla qubits for further protection.} Note the diagram suggests the co-located sensors are near, but do not interact with, the qubits. Pulses measured by the co-located sensors are recorded in the sensor's classical bit register, along the bottom of the diagram.
As the error correction portion of the circuit (columns~10--18) can only correct a single qubit error, at this point it is already possible to reject a single shot of the calculation if the co-located sensors measure two or more potential environmental disturbances to the qubits. When the sensor classical register reports 0x3, 0x5, 0x6, or 0x7, the Sensor REJECT flag is set to veto the calculation's output, as shown in the quantum circuit at column~10. Only when a single (or no) co-located sensor has an event does the quantum computation fruitfully proceed to the error correction stage in columns~10--18, after which the preserved quantum state is decoded and measured in columns~19--24.
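The Sensor REJECT condition can be stated compactly: the four register values listed above are exactly the 3-bit patterns with two or more bits set. A minimal sketch in Python (function name ours, not part of any hardware specification):

```python
def sensor_reject(sensor_reg: int) -> bool:
    """Veto the shot when two or more of the three co-located
    sensors report an environmental disturbance."""
    return bin(sensor_reg & 0b111).count("1") >= 2

# 0x3, 0x5, 0x6, and 0x7 trigger the veto; one event (or none) does not.
assert all(sensor_reject(r) for r in (0x3, 0x5, 0x6, 0x7))
assert not any(sensor_reject(r) for r in (0x0, 0x1, 0x2, 0x4))
```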
To understand the impact of the co-located sensors' capacity to detect potential error-inducing environmental disturbances, we must evaluate the truth table of the circuit. There are eight possible combinations of errors for each of the environmental- and entangling-type errors (columns 5 and 6, respectively) on the three computational qubits, resulting in sixty-four possible error cases for the complete truth table (i.e., $2^3\times2^3=64$ error combinations). Note we are not yet invoking the assumption that the single error probability is ``small,'' though we will later evaluate specific cases under that assumption.
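The truth-table bookkeeping is straightforward to enumerate. The sketch below (variable names ours) counts the 64 error combinations and groups them by the total number of errors applied, the exponent sum that later weights each row's probability:

```python
from collections import Counter
from itertools import product

# One truth-table row per pair of 3-bit error masks:
# [environmental mask] x (entangling mask).
cases = list(product(range(8), range(8)))
assert len(cases) == 64  # 2**3 * 2**3 error combinations

# Group rows by error order: the total number of X errors applied
# across both columns of the error channel.
order = Counter(bin(env).count("1") + bin(ent).count("1")
                for env, ent in cases)
assert order[0] == 1 and order[6] == 1 and sum(order.values()) == 64
```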
To compare the sensor-assisted circuit shown in Figure~\ref{fig:QC-S111-E111-SingleCircuit} to the standard three-qubit, bit-flip quantum error correction code, recognize that removing columns 5, 7, 8, and 9 produces the standard three-qubit, bit-flip error correction circuit. Thus, we can tabulate the truth table for both circuits together for direct comparison. As stated above, there are 64 possible error combinations; the full 64-element truth table is provided in the Appendix.
\renewcommand{\arraystretch}{1.1}
\begin{table}[ht!]
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[001] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\caption{Truth table for the case when a single environmental-type error occurs on qubit~0 (i.e., error mask: $[001]$), with any combination of entangling-type errors (i.e., error masks: $(000)$-$(111)$~). Outcome notation: C = Correct, CC = Correct via cancellation, F = Faulty, and R$_{\mathrm{PT}}$ = REJECT based on syndrome parity test. See text for complete table description.}
\label{tab:001_cases}
\end{table}
\renewcommand{\arraystretch}{1.25}
We focus on the interesting case when there is a \emph{single} qubit affected by an environmental-type disturbance phenomenon (in column 5), detectable by a co-located sensor. We assume 100\% of environmental-type phenomena are detected by the co-located sensors, though this is not required for gaining utility from a sensor-assist method. The \textbf{Outcome} column of Table~\ref{tab:001_cases} presents the eight outcome cases when a single environmental disturbance occurs on \textit{qubit}$_0$, with any possible combination of entangling errors on the three qubits. An error bit-mask notation is used to uniquely identify each possible error case. For example, in our bit-mask notation [001]~(011) means an environmental disturbance has caused a bit-flip error on \textit{qubit}$_0$ and two entangling-type bit-flip errors have occurred on \textit{qubit}$_0$ and \textit{qubit}$_1$, the \textit{qubit} designations referring again to the quantum circuit in Figure~\ref{fig:QC-S111-E111-SingleCircuit}. This bit-mask notation is given in the \textbf{Errors} column of Table~\ref{tab:001_cases}.
Note we are \emph{not} assuming that only a single error occurs in the error channel. We take for granted that if the error probabilities are ``small,'' then the probability of multiple errors occurring diminishes greatly. For additional clarity, in addition to the error bit-mask identifiers, the \textbf{Gates} column in Table~\ref{tab:001_cases} presents the quantum gates for both types of errors, represented in columns 5~\&~6 in the quantum circuit (Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). The assumption used in this report that all errors are bit-flip errors has the unintended consequence that two bit-flip errors cancel if they both appear on the same qubit. Thus, the resultant gate for each of the three qubits is presented in Table~\ref{tab:001_cases}, where \textbf{\texttt{X}} is a bit-flip error gate and \textbf{\texttt{I}} is the identity gate (i.e., no error). This cancellation effect is an artifact of the unrealistic model of pure bit-flip errors.
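Because two bit-flips on the same qubit cancel (\texttt{X}$\cdot$\texttt{X}~=~\texttt{I}), the resultant gate pattern is simply the bitwise XOR of the two error masks. A small illustrative helper (names and string convention ours):

```python
def resultant_gates(env_mask: int, ent_mask: int) -> str:
    """Net gate on each of the three qubits after the error channel.
    Two bit-flips on the same qubit cancel (X.X = I), so the net
    pattern is the XOR of the environmental and entangling masks."""
    net = env_mask ^ ent_mask
    # Printed as qubit_2 qubit_1 qubit_0, matching the [b2 b1 b0] masks.
    return "".join("X" if net & (1 << q) else "I" for q in (2, 1, 0))

# [001](001): the two flips on qubit_0 cancel -- the CC row of the table.
assert resultant_gates(0b001, 0b001) == "III"
# [001](011): a net error remains on qubit_1 only.
assert resultant_gates(0b001, 0b011) == "IXI"
```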
For each error combination, the \textbf{Synd.} column in Table~\ref{tab:001_cases} provides the syndrome measurement (columns 10--15 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}) recorded in the ancilla classical register. In the lower right of Figure~\ref{fig:QC-S111-E111-SingleCircuit}, classical logic is used to assess if the combination of the co-located sensor and the parity tests performed in the syndrome measurement are consistent with a single error on the qubit associated with the co-located sensor reporting an environmental disturbance. Each unique error combination results in a specific outcome from the quantum circuit. If no errors of any kind occur, then the circuit returns the correct (C) quantum state. Likewise, if only a single error occurs of either type (environmental or entangling), again the circuit returns the correct outcome (C). In some cases, as we have mentioned, the bit-flip error induced by the environmental disturbance is canceled by an entangling error on the same qubit. In these cases, such as [001]~(001), the quantum circuit returns the correct outcome quantum state, but via a fortuitous cancellation, a ``correct via cancellation'' (CC) outcome state.
As the number of error occurrences in the error channel increases, the standard error correction code and the sensor-assisted code return different outcomes. This is the first notable conclusion: the sensor-assisted code only has an impact for cases when the quantum state has an uncorrectable error. In this way one intuits correctly that the classical information provided by a co-located sensor cannot increase the number of correctly returned quantum states. However, the sensor-assist method can identify when an uncorrectable error has likely occurred, giving the user the opportunity to remove the calculation from further consideration in a computational effort.
To quantify these statements, we define several error probability notation terms, used in part in the probability (\textbf{Prob.}) column in Table~\ref{tab:001_cases}. In this column, $o$ is the probability of an environmentally-induced error and $p$ is the probability of an entangling-type error. The non-error complements are $\bar{o}=1-o$ and $\bar{p}=1-p$. We also define $\hat{P}=o+p-op$, the probability that at least one error occurred in the error channel (i.e., the combination of columns~5~\&~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). Note $\hat{P}$ is \emph{not} the probability that a qubit is in an error state after the error channel gates have been applied (i.e., the combined action of columns~5~\&~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). That is, $\hat{P}$ does not correspond to what one would measure as a qubit error rate except when the error is only either 100\% entangling-type or 100\% environmental-type. See the Appendix for a full derivation and definition of the terms $\hat{P}$, $o$, $p$, $\bar{o}$, and $\bar{p}$.
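For a given overall channel error probability $\hat{P}$ and entangling probability $p$, the environmental probability follows by inverting $\hat{P}=o+p-op$, giving $o=(\hat{P}-p)/(1-p)$. A quick numerical check (helper names ours):

```python
def p_hat(o: float, p: float) -> float:
    """Probability that at least one of the two error types occurs."""
    return o + p - o * p

def o_from(P_hat: float, p: float) -> float:
    """Invert P_hat = o + p - o*p for o (assumes p < 1)."""
    return (P_hat - p) / (1.0 - p)

# Parameter pairs used in the comparison table:
assert abs(o_from(0.20, 0.20)) < 1e-12  # purely entangling case, o = 0
assert abs(p_hat(o_from(0.20, 0.12), 0.12) - 0.20) < 1e-12
```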
Looking again at Table~\ref{tab:001_cases}, if two or more \textbf{\texttt{X}}-gates appear in the resultant gate column, the standard bit-flip quantum error correction code will run to completion, but the returned quantum state will be faulty (F). The sensor-assisted method, however, is able to set a Parity Test REJECT flag (R$_{\mathrm{PT}}$) in half of the faulty cases.\footnote{Note the probability of these multiple-error cases occurring in a real set of calculations is not one half. That is, there are 64 possible error cases, but the 64 cases are not reached with equal probability in a set of calculations.} The computational advantage of the sensor-assist method comes from the fact that the Parity Test REJECT flag is set for cases when the number of ``small'' probability errors is low. To see this, consider the probability (\textbf{Prob.}) column in Table~\ref{tab:001_cases}, which shows as an exponent the number of errors occurring. The sum of the exponents of the $o$ and $p$ terms gives the \emph{order} of ``small'' probability errors. Examining Table~\ref{tab:001_cases} shows that whereas the standard error correction code permits faulty computations to pass through at an error-order of $2$ and higher, the sensor-assisted code only allows faulty computations to pass through at an error-order of $3$ or higher. This computational benefit does, however, come at the expense of also rejecting correct computations at an error-order of $3$ that are arrived at through fortuitous cancellations (CC). As a reminder, the fortuitous cancellation (CC) cases are artifacts of the simplistic model of treating all errors as single bit-flips.
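The accept/reject logic implied by Table~\ref{tab:001_cases} can be reconstructed in a few lines. The sketch below is our reconstruction, not a specification of the classical hardware logic: the single-error syndrome values are read from the table's rows ($q_0\rightarrow$~0x3, $q_1\rightarrow$~0x1, $q_2\rightarrow$~0x2, combining by XOR), and a flagged shot is accepted only when the syndrome is consistent with no net error or a net error on the flagged qubit.

```python
# Syndrome of a single bit-flip on each qubit, read from the table.
SINGLE = {0: 0x3, 1: 0x1, 2: 0x2}

def syndrome(net_mask: int) -> int:
    """Syndrome of a net error pattern; single-error syndromes XOR."""
    s = 0
    for q in range(3):
        if net_mask & (1 << q):
            s ^= SINGLE[q]
    return s

def assisted_outcome(env_mask: int, ent_mask: int) -> str:
    """Classical decision sketch, assuming every environmental
    disturbance trips its co-located sensor."""
    if bin(env_mask).count("1") >= 2:
        return "R_S"                      # Sensor REJECT
    syn = syndrome(env_mask ^ ent_mask)   # net error after cancellation
    if env_mask and syn not in (0x0, SINGLE[env_mask.bit_length() - 1]):
        return "R_PT"                     # Parity Test REJECT
    return "accept"                       # proceed to correction

# Rows of the [001] table: C, R_PT, and the undetected F case.
assert assisted_outcome(0b001, 0b000) == "accept"
assert assisted_outcome(0b001, 0b010) == "R_PT"
assert assisted_outcome(0b001, 0b110) == "accept"
```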
\begin{table*}[ht!]
\centering
\begin{tabular}{l|r|r|r|r|r|r|r|r|}
\multicolumn{1}{l}{} & \multicolumn{8}{c}{\textbf{Outcome fractions, $\mathcal{F}$, for various error probabilities, $\hat{P}$ and $p$}} \\
\hline
& \multicolumn{2}{c|}{$\hat{P}=0.20$, $p=0.20$} & \multicolumn{2}{c|}{$\hat{P}=0.20$, $p=0.12$} & \multicolumn{2}{c|}{$\hat{P}=0.05$, $p=0.03$} & \multicolumn{2}{c|}{$\hat{P}=0.05$, $p=0.01$} \\
Outcome case & Standard & Assisted & Standard & Assisted & Standard & Assisted & Standard & Assisted \\
\hline
Correct (C) &\ 0.8960\ &\ 0.8960\ &\ 0.8751\ &\ 0.8751\ &\ 0.9911\ &\ 0.9911\ &\ 0.9917\ &\ 0.9917\ \\
Correct via cancellation (CC) &\ 0.0000\ &\ 0.0000\ &\ 0.0312\ &\ 0.0209\ &\ 0.0018\ &\ 0.0017\ &\ 0.0012\ &\ 0.0011\ \\
Faulty (F) &\ 0.1040\ &\ 0.1040\ &\ 0.0937\ &\ 0.0331\ &\ 0.0071\ &\ 0.0025\ &\ 0.0071\ &\ 0.0003\ \\
Parity Test REJECT (R$_{\mathrm{PT}}$) &\ -\ &\ 0.0000\ &\ -\ &\ 0.0476\ &\ -\ &\ 0.0034\ &\ -\ &\ 0.0022\ \\
Sensor REJECT (R$_{\mathrm{S}}$) &\ -\ &\ 0.0000\ &\ -\ &\ 0.0233\ &\ -\ &\ 0.0013\ &\ -\ &\ 0.0047\ \\
\hline
Effective correct outcome $\rightarrow$ &\ 0.8960\ &\ 0.8960\ &\ 0.9063\ &\ 0.9644\ &\ 0.9929\ &\ 0.9974\ &\ 0.9929\ &\ 0.9997\ \\
\hline
\end{tabular}
\caption{Standard bit-flip quantum error correction outcome fractions compared to those from the sensor-assisted quantum circuit. Here $\hat{P}=o+p-op$ is the probability for any error to occur in the error channel (i.e., the combined columns~5~\&~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). The entangling-type error (i.e., column~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}) has probability $p$ and the sensor detectable environmental disturbance-induced error (i.e., column~5 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}) has probability $o$. See the text body for further details and the Appendix for the derivation of the relationship between $\hat{P}$, $o$, and $p$.}
\label{tab:probabilities}
\end{table*}
Finally, Table~\ref{tab:probabilities} presents numerical values for several specific choices of error probability, parameterized by $\hat{P}$ and the entangling-error probability $p$. The values of the error probabilities are merely illustrative. There are four examples, and for each example the standard bit-flip quantum error correction code is compared to the sensor-assisted code. We present the fractional weights of specific outcomes from the quantum circuit in Figure~\ref{fig:QC-S111-E111-SingleCircuit}, as described above in the explanation of Table~\ref{tab:001_cases}, with the addition of the Sensor REJECT (R$_{\mathrm{S}}$) cases (which appear in the full 64-combination tables in the Appendix).
From Table~\ref{tab:probabilities} we see several features. First, the fractional weight for the correct outcome (C) is always the same for the standard code and the sensor-assisted code, the ``intuition'' mentioned above. Second, when $\hat{P}=0.20$ and $p=0.20$, the environmental disturbance error probability is zero, so the two codes perform identically. Third, the key metric for determining the computational advantage is the effective correct outcome fractional weight, calculated as $(C+CC)/(C+CC+F)$. Because a portion of the cases are removed from consideration by the logic of the sensor-assisted method, the denominator is lower than for the standard quantum error correction code. The case $\hat{P}=0.20$ and $p=0.12$ is the one quoted in the abstract of this report. Fourth, as the overall scale of the error's fractional weighting decreases, the utility of the sensor-assist method decreases, as one would intuitively expect.
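The $\hat{P}=p=0.20$ column can be reproduced by hand: with $o=0$ the sensors never fire, and both codes return the correct state exactly when at most one qubit flips. A quick check (helper name ours):

```python
def standard_correct(P: float) -> float:
    """Correct-outcome fraction of the three-qubit bit-flip code with
    independent per-qubit flip probability P: zero or one flip."""
    return (1 - P) ** 3 + 3 * P * (1 - P) ** 2

# Matches the o = 0 column of the comparison table (P_hat = p = 0.20).
assert abs(standard_correct(0.20) - 0.8960) < 5e-5
assert abs((1 - standard_correct(0.20)) - 0.1040) < 5e-5
```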
\section{\label{sec:discussion}Discussion}
A number of assumptions were made in the foregoing analysis. It is valuable to explore the limitations these assumptions may impose on the results of this work. First and foremost, we assumed all quantum computation error types are of the bit-flip variety. In the case of using a co-located sensor for simply ``vetoing'' selected calculations in response to the detection of an environmental disturbance (Sec.~\ref{sec:example_error}), this choice of error-type is of no substantive consequence since any error type is still subject to the same ``veto'' of the entire computation. However, one might argue the errors present in the actual Yorktown backend calculation are not even discrete in nature. That is, our assumption of a bit-flip type error is effectively assuming the co-located sensor is responding to discrete events, like the interaction of an ionizing $\gamma$-ray in the chip substrate. If the environmental disturbances are of a continuous nature, it may be difficult to know when the co-located sensor is reporting a disturbance warranting rejection of the calculation instance. This could be assessed through empirical correlation studies to determine at what level of co-located sensor response it becomes beneficial to reject a specific calculation.
Perhaps more pointedly, even the standard textbook example three-qubit, bit-flip quantum error correction scheme \emph{presumes knowledge} of the error type. In other words our assumption of a bit-flip error type is entirely analogous to pedagogical presentations~\cite{10.5555/1972505} of a purely quantum method of error correction. We believe a key point is that if a co-located sensor's response is \emph{preferentially correlated} with a specific type of correctable error in the quantum calculation, then a sensor-assisted mitigation code implementation is likely fruitful. Furthermore, while not shown in this report, the quantum circuit developed in this report for use with co-located sensors also works for phase-flip errors when Hadamard gates are inserted on each computational qubit at what would be columns 4.5 and 9.5 in Figure~\ref{fig:QC-S111-E111-SingleCircuit}, as well as changing the error types in columns 5 and 6 to \textbf{\texttt{Z}}-gates.
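The phase-flip extension rests on the conjugation identity $HZH=X$: sandwiching the error channel between Hadamard gates maps \textbf{\texttt{Z}} errors onto the bit-flip errors the circuit already corrects. A quick numerical verification of the identity (using NumPy; a check of the underlying algebra, not of the full circuit):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
X = np.array([[0, 1], [1, 0]])                # bit flip
Z = np.array([[1, 0], [0, -1]])               # phase flip

# Conjugating a phase flip by Hadamards yields a bit flip: H Z H = X.
assert np.allclose(H @ Z @ H, X)
# And vice versa (H X H = Z), so the correction logic is unchanged.
assert np.allclose(H @ X @ H, Z)
```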
Related to the exclusive use of bit-flip type errors in this report's analysis are the, as we have called them, ``fortuitous cancellations'' that arise as a natural (logical) consequence of introducing two independent error types within the error channel. We readily agree with the reader that it seems highly unlikely two such errors, of presumably very different phenomenological cause, would perfectly cancel each other on a single qubit. Any specific case of concern would need evaluation in the framework of Table~\ref{tab:001_cases}.
In reality, the type of ``errors'' induced by ionizing radiation interactions in superconducting qubit devices is not entirely unknown. Our prior work~\cite{Oliver2020} has shown that elevated levels of ionizing radiation result in increased quasiparticle density in the qubits' superconducting circuits. As quasiparticles tunnel through the Josephson junctions of, for example, transmon qubits, the parity of the quantum state flips. Thus, the appearance of parity transitions in transmon qubits due to tunneling of quasiparticles~\cite{ISI:000320589900109,PhysRevApplied.12.014052,PhysRevB.84.064517,PhysRevLett.121.157701} is a signature of energy injections due to ionizing radiation. The transition rates of qubit relaxation and dephasing due to quasiparticle tunneling through Josephson junctions were previously investigated~\cite{ISI:000320589900109}.
In this report, we have presented simple methods of sensor-assisted fault mitigation in quantum computation. We anticipate sensor-assisted fault mitigation is possible within the frameworks of surface and stabilizer codes, though we have not explored those possibilities in any detail. Surface codes are potentially particularly interesting as it is easy to envision a physical surface array of single-qubit transmon chips, each containing a QET sensor. A high-quality chip-to-chip communication method would need development, but it is perhaps achievable through air-bridges or capacitive coupling elements in the circuits.
In this report, we have consistently had in mind ionizing radiation as representative of a class of environmental effects disturbing superconducting transmon qubit systems. We proposed a specific sensor type---the QET---as a means for detecting these ionizing-radiation-specific environmental disturbances. At the present time, ionizing radiation is a minor contributor to quantum computational error. However, we note plans for future quantum computing systems, such as the ``Goldeneye'' million-qubit-capable cryostat IBM is building~\cite{ScienceNews-Goldeneye}, are reaching the same physical scale as deep underground cryogenic research instruments~\cite{ALDUINO20199}, which work against ionizing radiation as a background to their experimental detection goals through active shielding, passive shielding, and analysis. The likelihood of ionizing radiation interactions increases roughly linearly with the mass of the instrument, which for transmon qubits is the total silicon chip substrate mass. Once the extraneous silicon chip substrate mass is minimized, the interaction likelihood of ionizing radiation within a single computational cycle will scale directly with the number of qubits (and the duration of the computation). In this regime of large-scale qubit systems (and long-duration computations) we believe the utility of sensor-assisted fault mitigation is likely to grow.
A key question is whether these considerations extend beyond ionizing radiation to other, more general, environmental disturbances. We believe the QET co-located sensor approach described in this report is applicable to most silicon chip-based superconducting Josephson junction qubit devices (e.g., flux, charge, and phase qubit varieties). However, the broader objective of the analysis presented in this report was to show the potential computational value achievable \emph{if} quantum computational error types are preferentially correlated with sensor-detectable environmental disturbances. For superconducting transmon qubit systems, other case types may include IR photon leakage sensing, vibration-induced energy coupling, and stray electric- or magnetic-field fluctuations. We are not in a position to speculate on analogous environmental disturbance error types and sensor combinations in other qubit modalities. We look to experts in the relevant disciplines to consider if the ideas presented in this report are transferable to other quantum computing systems.
During the final preparation of this report for submission, we were made aware of an article by J.M.~Martinis~\cite{martinis2020saving} which presented a model of ionizing radiation induced errors in superconducting qubits. Of interest to our own report, the Martinis article touches on error correction in the face of disturbances from ionizing radiation. In particular, Martinis states, ``if errors are large or correlated, bunching together either in time or across the chip in space, then error decoding fails.'' We concur with this assessment as it relates to disturbances from ionizing radiation and find the design solutions suggested by Martinis to be compelling. Our own suggested design solution, described in this report, is to place uniquely paired sets of a qubit and a sensor together on shared chip substrate. Communication via air-bridges, capacitive coupling, or other novel means for qubit-to-qubit interconnection is required to create a network of qubits for computation. In this way, we see no inconsistencies between the concepts presented in this report and the concepts presented by Martinis.
\section{\label{sec:summary}Summary}
In this report, we proposed hybrid superconducting device concepts for quantum computation. The inclusion of a co-located sensor on qubit substrates provides the potential to detect environmental disturbances causing errors in a quantum computation. In the simplest form, such co-located sensors provide a means to selectively ``veto'' and reject just those calculations where an environmental disturbance is likely to result in an incorrect calculation result. We showed the computational advantage of such a scheme and proposed device concepts that could implement such error mitigating techniques using proven device fabrication designs and methods.
We abstracted the co-located sensor concept to a scenario where every qubit has a uniquely assigned co-located sensor. We developed a formulation of the three-qubit, bit-flip quantum error correction code to take advantage of the co-located sensor's ability to detect environmental disturbances. The results demonstrated an enhanced effective quantum computational performance at the cost of the rejection of some calculation repetitions.
In both fault mitigation concepts considered in this report, the computational enhancements are numerically modest. Nevertheless, we believe these results recommend the development and investigation of a new class of superconducting quantum computation devices that include co-located sensors for the detection of environmental disturbances. We believe such devices are a potential new tool in the broad category of hybrid quantum-classical algorithm development and approaches to quantum error mitigation~\cite{endo2020hybrid}.
\begin{acknowledgments}
The concepts presented in this work stem from efforts underway at Pacific Northwest National Laboratory (PNNL). PNNL is a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy under Contract No. DE-AC05-76RL01830. The research results presented in this work were developed outside of any supporting grant or internal R\&D support mechanism and are associated with U.S. provisional patent application number~63/123,050 (9 December 2020). The authors thank Alexander Melville and Kyle Serniak (both at MIT Lincoln Laboratory) for answering questions regarding how superconducting qubits located on separate chip substrates might inter-communicate through superconducting air-bridges or capacitive coupling across gaps between chips, making some of the speculative device concepts we propose seem more plausible to the authors. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. The authors thank Mark V. Raugas and Tobias J. Hagge (both at PNNL) for constructive comments on an early draft of this report. PNNL Information Release PNNL-SA-158581.
\end{acknowledgments}
\input{main.bbl}
\appendix
\begin{figure}
\caption{The possibilities for combining two independent random event types with individual probabilities of occurrence, $o$ and $p$. The values $(1-o)$ and $(1-p)$ are the individual probabilities for each random event type to \emph{not} occur.}
\label{fig:outcome_table}
\end{figure}
\section{Other sensor-assisted qubit concepts}
\begin{figure*}
\caption{Qubit and QET sensor concept.}
\label{fig:3-qubit-QET-scheme}
\caption{Scheme for 3 qubit and QET.}
\label{fig:3-qubit-QET-connections}
\caption{Three chips of 3 qubits each.}
\label{fig:3-chip-3-qubit}
\caption{Concepts for 3-qubit devices utilizing co-located QET sensors. The star symbol, $\star$, represents a chip-to-chip inter-communication point.}
\label{fig:multi-qubit-TES-scheme}
\end{figure*}
\begin{figure*}
\caption{The 9-qubit Shor code, left, with qubit groupings and alpha labels on the computational steps. To the right, qubit groupings with a physical arrangement similar to that shown in Figure~\ref{fig:3-chip-3-qubit}.}
\label{fig:3x3-physical}
\end{figure*}
\begin{figure*}
\caption{Similar to Fig.~\ref{fig:3x3-physical}, but with the physical qubits distributed across the Shor code.}
\label{fig:3x3-logical}
\end{figure*}
Early in the development of the sensor-assisted quantum error correction concepts presented in this report, we explored integrating co-located sensors into the 9-qubit Shor code. We envisioned groups of qubits on shared substrates, monitored by co-located QET sensors; see Figure~\ref{fig:multi-qubit-TES-scheme}. We envisioned 3-qubit chips, in a grouping of three chips, to provide the 9~physical qubits needed for the Shor code. The concept would assume a standard Toffoli gate (CCNOT gate) implementation is available and that there is a means for interconnecting the three chips. Concepts for physical implementation in superconducting Josephson multi-qubit devices were recently explored in the literature~\cite{PhysRevA.101.022308}, and we believe chip-to-chip air bridges or capacitive coupling are future possibilities. Use of ancilla qubits is also likely in a practical implementation, though that was not considered in these initial concepts.
The 9-qubit Shor code contains several 3-fold symmetries we believed would prove advantageous for using co-located sensors to provide informed error correction coding. Figures~\ref{fig:3x3-physical} and~\ref{fig:3x3-logical} present these ideas. In each of the figures, the computational gates are assigned a designating letter (a--j) and are grouped within colored boxes. The qubits residing on the same chip share the same color (blue, green, or orange).
The qubits can be grouped in sets of three so that chip-to-chip communication is minimized (Fig.~\ref{fig:3x3-physical}). However, the signals from a co-located sensor will then flag an entire sub-group of the qubits as potentially error-prone, making the Shor code fail in general. An alternative is to distribute the physical qubits across the Shor code (see Fig.~\ref{fig:3x3-logical}), but at the expense of having the majority of the multi-qubit gates require chip-to-chip communication. Worse, a single co-located sensor event once again flags three qubits across the Shor code as potentially being in error. As the Shor code can, in general, only protect against two qubit errors, we realized this approach was likely not fruitful.
From this analysis, we abandoned further development of error correction schemes in which a co-located sensor is assigned to more than one single (independent) qubit. However, we expect that for specific computational implementations there may yet be utility in exploiting symmetries within the computation to determine how to efficiently arrange co-located sensors while minimizing error-prone qubit-to-qubit inter-communication.
\section{Statistics of a repeated calculation}
Here we present more statistics of the balanced Deutsch-Jozsa calculation presented in the main report. The key interest is whether the enhanced performance of the hypothesized sensor-assisted computation is statistically significant. One hundred trials of 81,920 shots were conducted to determine the variation of the sample. Figure~\ref{fig:balancedDJ} presents the distributions of these one hundred trials of 81,920 shots. The Yorktown backend shows greater variability (Fig.~\ref{fig:balancedDJ}(a)~\&~(e)~), suggesting error-inducing effects beyond simple Poisson statistical variation.\footnote{This is also likely a result of the IBM Q Experience's transpilation step for implementation of a quantum circuit on a specific backend, as well as errors introduced solely in the measurement stage.} The noise model for the Yorktown backend (Fig.~\ref{fig:balancedDJ}(b)~\&~(f)~), however, shows Poissonian statistical variation, as expected for a fixed, deterministic simulation process. It is interesting to note the modeled noise for the Yorktown backend does not appear to closely match the results of the actual device, and in fact produced ``incorrect'' state outcomes in a larger fraction of calculations (i.e., Fig.~\ref{fig:balancedDJ}(f)~). As expected, the bit-flip-based error models (Fig.~\ref{fig:balancedDJ}(c,d)~\&~(g,h)~) show only Poissonian statistical variation, as the errors are discrete in nature and follow a strict, fixed probability of being introduced into the quantum circuit by construction. It is clear from these results that the improvement provided by the use of the co-located sensor to ``veto'' some calculations produces a statistically significant enhancement to the computational result when comparing the two bit-flip-based modeled error cases.
That is, comparing Figure~\ref{fig:balancedDJ}(d)~to~(c) shows a greater fraction of correct outcome states, while comparing Figure~\ref{fig:balancedDJ}(h)~to~(g) shows a lower fraction of incorrect outcome states.
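As a yardstick for what purely Poissonian variation should look like, the trial-to-trial spread of an outcome fraction $f$ over $N$ shots is the binomial standard deviation $\sqrt{f(1-f)/N}$. The short Python sketch below illustrates this (the fraction value $0.9$ is an arbitrary illustration, not a measured result):

```python
import math

# Expected binomial (Poissonian) standard deviation of an outcome
# fraction f measured over N shots per trial.
def binomial_std(f: float, N: int) -> float:
    return math.sqrt(f * (1.0 - f) / N)

N = 81920   # shots per trial, as used in the text
f = 0.9     # illustrative correct-outcome fraction (assumed value)
sigma = binomial_std(f, N)
# An observed trial-to-trial spread much larger than sigma suggests
# error-inducing effects beyond simple statistical variation.
```

A backend whose hundred-trial histogram is much wider than this $\sigma$ is therefore exhibiting non-statistical error sources.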
\begin{figure*}
\caption{Results from three implementations of a balanced Deutsch-Jozsa calculation (see Fig.~\ref{fig:balanced-dj-circuit}).}
\label{fig:balancedDJ}
\end{figure*}
\section{Probabilities in two error systems}
In this section we analyze, in an entirely generic way, the probability outcomes for two independent, random, bi-modal processes (see Figure~\ref{fig:outcome_table}) on three independent channels. Consider two independent random event processes, each having fixed probabilities, $o$ and $p$, for occurring in a given time period. We refer to these as Type-$o$ and Type-$p$ events in the context of this report. Initially, we make no assumptions about what these events represent. We are interested in detailing all possible ways these two independent events can occur in the given time period. None of the following discussion relies on any quantum mechanical assumptions whatsoever or any knowledge of the event type. There are only four possible cases, as presented in Figure~\ref{fig:outcome_table}.
Since Figure~\ref{fig:outcome_table} is complete and exhaustive of all possibilities, we can write two probability equations to represent the probability of at least one error occurring and the probability of no error occurring, respectively:
\begin{eqnarray}
P_{\mathrm{error}} & = & o \cdot p + o \cdot (1-p) + (1-o) \cdot p \\
& = & o + p - o p \equiv \hat{P}
\end{eqnarray}
\begin{eqnarray}
P_{\mathrm{no~error}} & = & (1-o) \cdot (1-p) \\
& = & 1 - p - o + o p \\
& = & 1 - ( o + p - o p ) = 1 - \hat{P}
\end{eqnarray}
Thus, for two independently drawn random bi-modal errors, the combined probability of an event is $\hat{P}$, while the probability of no event is $(1-\hat{P})$. In this work we will identify the Type-$o$ and Type-$p$ events with different sorts of errors induced on a qubit. We further assume the probability $\hat{P}$ satisfies the quantum error correction requirement of being ``small'' (i.e., less than $0.5$). It should be noted that if $\hat{P}$ is small, then $o$ and $p$ must also both individually be small and therefore also less than $0.5$.
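The four cases of Figure~\ref{fig:outcome_table} can be checked by direct enumeration. The following Python sketch (variable names are ours; the probability values are arbitrary examples) confirms that the three error cases sum to $\hat{P}=o+p-op$ and the no-error case to $1-\hat{P}$:

```python
# Enumerate the four outcomes of two independent binary event types
# with occurrence probabilities o and p (arbitrary example values).
o, p = 0.03, 0.07

cases = {
    "both":   o * p,              # Type-o and Type-p both occur
    "o_only": o * (1 - p),        # only the Type-o event occurs
    "p_only": (1 - o) * p,        # only the Type-p event occurs
    "none":   (1 - o) * (1 - p),  # neither event occurs
}

P_error = cases["both"] + cases["o_only"] + cases["p_only"]
P_hat = o + p - o * p  # closed form derived in the text

assert abs(P_error - P_hat) < 1e-12
assert abs(cases["none"] - (1 - P_hat)) < 1e-12
assert abs(sum(cases.values()) - 1.0) < 1e-12  # cases are exhaustive
```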
Now consider the outcome equation for three qubits, $\mathrm{q0}$, $\mathrm{q1}$, $\mathrm{q2}$, in one time period when an error may occur on any combination of qubits. We write this as,
\begin{eqnarray}
\mathbf{1}_{\mathrm{q0,q1,q2}} & = & \mathbf{1}_{\mathrm{q0}} \times \mathbf{1}_{\mathrm{q1}} \times \mathbf{1}_{\mathrm{q2}} \\
& = & \big( (1 - \hat{P}) + \hat{P} \big)_{\mathrm{q0}} \\
& & \times \big( (1 - \hat{P}) + \hat{P} \big)_{\mathrm{q1}} \\
& & \times \big( (1 - \hat{P}) + \hat{P} \big)_{\mathrm{q2}}
\end{eqnarray}
which exhausts all possible outcomes for the three qubits. We further expand this outcome equation to highlight the individual Type-$o$ and Type-$p$ errors, adopting the compact notation $\bar{o}=(1-o)$ and $\bar{p}=(1-p)$,
\begin{eqnarray}
\mathbf{1}_{\mathrm{q0,q1,q2}} & = & \big( \bar{o} \bar{p} + o p + o \bar{p} + \bar{o} p \big)_{\mathrm{q0}} \\
& & \times \big( \bar{o} \bar{p} + o p + o \bar{p} + \bar{o} p \big)_{\mathrm{q1}} \\
& & \times \big( \bar{o} \bar{p} + o p + o \bar{p} + \bar{o} p \big)_{\mathrm{q2}}
\end{eqnarray}
Given the assumption in this report that all error types are bit-flip errors, the terms $o p$ have special significance in that the two errors on a single qubit will cancel out. Thus, we add a notation, $\bar{c}$, representing when errors cancel out:
\begin{eqnarray}
\mathbf{1}_{\mathrm{q0,q1,q2}} & = & \big( \bar{o} \bar{p} + \bar{c} + o \bar{p} + \bar{o} p \big)_{\mathrm{q0}} \\
& & \times \big( \bar{o} \bar{p} + \bar{c} + o \bar{p} + \bar{o} p \big)_{\mathrm{q1}} \\
& & \times \big( \bar{o} \bar{p} + \bar{c} + o \bar{p} + \bar{o} p \big)_{\mathrm{q2}}
\end{eqnarray}
At this point we assume the Type-$o$ and Type-$p$ errors, while independent between the three qubits, come from the same physical source types and have the same probability values (i.e., $o = o_{\mathrm{q0}}=o_{\mathrm{q1}}=o_{\mathrm{q2}}$ and $p = p_{\mathrm{q0}}=p_{\mathrm{q1}}=p_{\mathrm{q2}}$). Thus, multiplying through the outcome equation and regrouping terms associated with order of $\bar{o}$-factors, we arrive at:
\begin{eqnarray}
\mathbf{1}_{\mathrm{q0,q1,q2}} & = & \bar{o}^3 \left(3 p \bar{p}^2+\bar{p}^3\right) \\
& & +\ \bar{o}^3 \left(p^3+3 p^2 \bar{p}\right) \\
& & +\ \bar{o}^2 \left(6 p \bar{c} \bar{p}+3 \bar{c} \bar{p}^2+3 o \bar{p}^3\right) \\
& & +\ \bar{o}^2 \left(3 p^2 \bar{c}+3 o p^2 \bar{p}+6 o p \bar{p}^2\right) \\
& & +\ \bar{o} \left(3 p \bar{c}^2+3 \bar{c}^2 \bar{p}+6 o \bar{c} \bar{p}^2\right) \\
& & +\ \bar{o} \left(6 o p \bar{c} \bar{p}+3 o^2 p \bar{p}^2+3 o^2 \bar{p}^3\right) \\
& & +\ \bar{c}^3+3 o \bar{c}^2 \bar{p} \\
& & +\ 3 o^2 \bar{c} \bar{p}^2+o^3 \bar{p}^3
\end{eqnarray}
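The regrouped expansion above can be verified numerically: treating $\bar{c}$ as an independent symbol, the sum of all terms must reproduce the cubed per-qubit factor $(\bar{o}\bar{p}+\bar{c}+o\bar{p}+\bar{o}p)^3$. A quick Python check, with arbitrary test values for $o$, $p$, and $\bar{c}$:

```python
# Numerical check of the three-qubit expansion: with cbar kept as an
# independent symbol, the grouped terms must equal the cubed per-qubit sum.
o, p, cbar = 0.02, 0.05, 0.013   # arbitrary test values
obar, pbar = 1 - o, 1 - p

terms = [
    obar**3 * (3*p*pbar**2 + pbar**3),
    obar**3 * (p**3 + 3*p**2*pbar),
    obar**2 * (6*p*cbar*pbar + 3*cbar*pbar**2 + 3*o*pbar**3),
    obar**2 * (3*p**2*cbar + 3*o*p**2*pbar + 6*o*p*pbar**2),
    obar * (3*p*cbar**2 + 3*cbar**2*pbar + 6*o*cbar*pbar**2),
    obar * (6*o*p*cbar*pbar + 3*o**2*p*pbar**2 + 3*o**2*pbar**3),
    cbar**3 + 3*o*cbar**2*pbar,
    3*o**2*cbar*pbar**2 + o**3*pbar**3,
]

product_form = (obar*pbar + cbar + o*pbar + obar*p)**3
assert abs(sum(terms) - product_form) < 1e-12
```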
We identify Type-$o$ errors with environmental, sensor-detectable errors and Type-$p$ errors with entanglement type errors, which cannot be detected. We regroup the terms related to whether the errors are correctable (C), faulty (F), or correctable via cancellation (CC) as well as the distinction of whether the sensor-assist either outright REJECTs the calculation (R$_{\mathrm{S}}$) or sets the REJECT flag based on the syndrome parity test (R$_{\mathrm{PT}}$). These amount to the fractions, $\mathcal{F}$, of cases of each kind.
\begin{eqnarray}
\mathcal{F}_{\mathrm{C\textit{vs.}C}} & = & \bar{o}^3 \left(3 p \bar{p}^2+\bar{p}^3\right) + \bar{o}^2 \left(3 o \bar{p}^3\right) \\
\mathcal{F}_{\mathrm{CC\textit{vs.}CC}} & = & \bar{o}^2 \left(3 \bar{c} \bar{p}^2\right) \\
\mathcal{F}_{\mathrm{F\textit{vs.}F}} & = & \bar{o}^3 \left(p^3+3 p^2 \bar{p}\right) \\
& & +\ \bar{o}^2 \left(3 p^2 \bar{c}+3 o p^2 \bar{p}\right) \\
\mathcal{F}_{\mathrm{CC\textit{vs.}R}_{\mathrm{PT}}} & = & \bar{o}^2 \left(6 p \bar{c} \bar{p}\right) \\
\mathcal{F}_{\mathrm{F\textit{vs.}R}_{\mathrm{PT}}} & = & \bar{o}^2 \left(6 o p \bar{p}^2\right) \\
\mathcal{F}_{\mathrm{CC\textit{vs.}R}_{\mathrm{S}}} & = & \bar{o} \left(3 p \bar{c}^2+3 \bar{c}^2 \bar{p}+6 o \bar{c} \bar{p}^2\right) \\
& & +\ \bar{c}^3+3 o \bar{c}^2 \bar{p} \\
\mathcal{F}_{\mathrm{F\textit{vs.}R}_{\mathrm{S}}} & = & \bar{o} \left(6 o p \bar{c} \bar{p}+3 o^2 p \bar{p}^2+3 o^2 \bar{p}^3\right) \\
& & +\ 3 o^2 \bar{c} \bar{p}^2+o^3 \bar{p}^3
\end{eqnarray}
An alternative means for presenting the outcomes is through the truth table of all 64 possible error combinations. This is presented in Tables~\ref{tab:all_64_cases_I}~\&~\ref{tab:all_64_cases_II}. For specific numerical cases, see Figure~\ref{fig:eff_fault}, which shows the effective fault rate (fraction of faults in unrejected calculations, i.e., $ \mathcal{F}_{\mathrm{F\textit{vs.}F}} / ( \mathcal{F}_{\mathrm{F\textit{vs.}F}} + \mathcal{F}_{\mathrm{C\textit{vs.}C}} + \mathcal{F}_{\mathrm{CC\textit{vs.}CC}} ) $ calculated from equations C24-C27).
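Since the fractions $\mathcal{F}$ exhaust all cases, they must sum to unity once $\bar{c}$ is restored to its value $op$, and the effective fault rate follows directly. A minimal sketch (the per-qubit error rates are illustrative, not measured values):

```python
# Outcome fractions with cbar = o*p (two bit-flip errors on the same
# qubit cancel), and the effective fault rate of unrejected calculations.
o, p = 0.02, 0.05                 # illustrative per-qubit error rates
obar, pbar, cbar = 1 - o, 1 - p, o * p

F_C   = obar**3*(3*p*pbar**2 + pbar**3) + obar**2*(3*o*pbar**3)
F_CC  = obar**2*(3*cbar*pbar**2)
F_F   = obar**3*(p**3 + 3*p**2*pbar) + obar**2*(3*p**2*cbar + 3*o*p**2*pbar)
F_RPT = obar**2*(6*p*cbar*pbar) + obar**2*(6*o*p*pbar**2)
F_RS  = (obar*(3*p*cbar**2 + 3*cbar**2*pbar + 6*o*cbar*pbar**2)
         + cbar**3 + 3*o*cbar**2*pbar
         + obar*(6*o*p*cbar*pbar + 3*o**2*p*pbar**2 + 3*o**2*pbar**3)
         + 3*o**2*cbar*pbar**2 + o**3*pbar**3)

total = F_C + F_CC + F_F + F_RPT + F_RS
assert abs(total - 1.0) < 1e-12   # the cases are exhaustive

# Effective fault rate: fraction of faults among unrejected calculations.
eff_fault = F_F / (F_F + F_C + F_CC)
```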
\begin{figure}
\caption{Fraction of calculations containing faults that are not rejected by combined information of the co-sensors and parity registers, as a function of the total per-qubit error rate $\hat{P}$.}
\label{fig:eff_fault}
\end{figure}
\def\arraystretch{1.1}
\begin{table*}[h]
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[000] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{0}$ & \phantom{AC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{0} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{0} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{0} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{0} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[001] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[010] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[100] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & C \textit{vs.} C \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} F \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\caption{See main report and Table~\ref{tab:001_cases} for description of tables.}
\label{tab:all_64_cases_I}
\end{table*}
\begin{table*}[h]
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[011] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[101] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[110] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\small
\centering
\begin{tabular}{|cc|ccc|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\
\multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\
\multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\
\hline \hline
~[111] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{3}$ & \\ \hline
& (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{2}$ & \\ \hline
& (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{2}$ & \\ \hline
& (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{1}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{2}$ & \\ \hline
& (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{1}$ & \\ \hline
& (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{1}$ & \\ \hline
& (110) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{2}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{1}$ & \\ \hline
& (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{3}$ & \\
& & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\
& & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{0}$ & \\ \hline
\end{tabular}
\caption{See main report and Table~\ref{tab:001_cases} for description of tables. Here R$_{\mathrm{S}}$ = REJECT based on sensors.}
\label{tab:all_64_cases_II}
\end{table*}
\end{document}
\begin{document}
\title
[Survival of dominated strategies]
{Survival of dominated strategies under imitation dynamics}
\author
[P.~Mertikopoulos]
{Panayotis Mertikopoulos$^{\ast}$}
\address{$^{\ast}$\,
Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, 38000 Grenoble, France}
\EMAIL{[email protected]}
\author
[Y.~Viossat]
{Yannick Viossat$^{\diamond,\sharp}$}
\address{$\diamond$\,
CEREMADE, Université Paris Dauphine-PSL, Place du Maréchal de Lattre de Tassigny, F-75775 Paris, France
}
\address{$\sharp$\,
Corresponding author}
\EMAIL{[email protected]}
\subjclass[2020]{Primary: 91A22, 91A26.}
\keywords{
Evolutionary game theory;
evolutionary game dynamics;
imitation;
dominated strategies;
survival;
rationality.}
\thanks{This article is dedicated to the memory of Bill Sandholm, who, had he lived, would have been a co-author of this work.
We thank him, Vianney Perchet, Jorge Pe\~{n}a, seminar audiences, and two anonymous reviewers for helpful comments.}
\begin{abstract}
The literature on evolutionary game theory suggests that pure strategies that are strictly dominated by other pure strategies always become extinct under imitative game dynamics,
but they can survive under innovative dynamics. As we explain, this is because innovative dynamics favour rare strategies while standard imitative dynamics do not. However, as we
also show, there are reasonable imitation protocols that favour rare or frequent strategies, thus allowing strictly dominated strategies to survive in large classes of imitation dynamics.
Dominated strategies can persist at nontrivial frequencies even when the level of domination is not small.
\end{abstract}
\maketitle
\allowdisplaybreaks
\section{Introduction}
\label{sec:intro}
Many economic models assume that the agents they consider are rational.
This may be defended as a reference case or for tractability.
A more interesting justification is that, at least in tasks that they perform routinely, and for which they have enough time to experiment, even weakly rational agents should come to learn which strategies do well, and behave eventually \emph{as if} they were rational.
The same intuition applies to other evolutionary processes, such as natural selection or imitation of successful agents. But does evolution really wipe out irrational behaviors?
A simple way to tackle this question in a game-theoretic context is to study whether evolutionary game dynamics wipe out dominated strategies, in the sense that the frequency of these strategies goes to zero as time goes to infinity. This may be interpreted in several ways, depending on whether domination means weak or strict domination, whether the strategies considered are pure or mixed, and whether the dynamics are deterministic or stochastic (see Viossat, 2015 \cite{11}, for a partial survey).
We focus here on what we see as the most basic question:
\emph{do pure strategies that are strictly dominated by other pure strategies become extinct under deterministic dynamics in continuous time?}
The answer of the literature is mixed.
Roughly speaking, evolutionary game dynamics may be classified as imitative or innovative.
In imitative dynamics, which model imitation processes or pure selection (without mutation),
strategies that are initially absent from the population never appear.
The leading example is the replicator dynamics.
In innovative dynamics, strategies initially absent from the population may appear.
Examples include the best-reply dynamics (and smoothed versions of it), the Brown-von Neumann-Nash dynamics, the Smith dynamics, the projection dynamics, and others.
The literature shows that imitative dynamics (in the sense of Sandholm, 2010 \cite{10}) always eliminate pure strategies strictly dominated by other pure strategies (Akin, 1980 \cite{1}; Nachbar, 1990 \cite{7}), while innovative dynamics need not do so, with the notable exception of the best-reply dynamics.
Indeed, building on Berger and Hofbauer (2006) \cite{2}, Hofbauer and Sandholm (2011) \cite{5} show that for all dynamics satisfying four natural conditions called Innovation, Continuity, Nash Stationarity and Positive Correlation, there are games in which pure strategies strictly dominated by other pure strategies survive in relatively high proportion.
Moreover, their simulations show that, at least for some well-known dynamics, dominated strategies may survive at non-negligible frequencies even
when the payoff difference between the dominated and the dominating strategy is relatively large.
Thus, with respect to elimination of dominated strategies, there seems to be a sharp contrast between imitative and innovative processes.
This paper argues that this is not the case.
As we shall explain, the intuitive reason why innovative dynamics allow for survival of dominated strategies is that they give an edge to rare strategies.
Indeed, the \emph{Innovation} property of Hofbauer and Sandholm stipulates that if a strategy is an unplayed best-response to the current population state, then it should appear in the population: technically, the derivative of its frequency should be positive.
The per-capita growth rate of its frequency is then infinite.
Moreover, the \emph{Continuity} property requires that the dynamics depends smoothly on the payoffs of the game and the population state.
Taken together, these two properties imply that rare strategies that are almost-best replies to the current population state have a huge per-capita growth rate, potentially higher than strategies that have a slightly better payoff, but are more frequent.
In this sense, Hofbauer and Sandholm's dynamics favour rare strategies.
When a dominated strategy becomes rare, this advantage to rarity may compensate for the fact of being dominated and allow it to survive.
By contrast, in imitative dynamics, the per-capita growth rates of pure strategies are always ordered as their payoffs,
irrespective of their frequencies in the population. But we feel that this is, in some sense, an artifact, a legacy of the history of evolutionary game theory.
Indeed, imitative dynamics arose as variants of the replicator dynamics, which originated as a natural selection model, and was only a posteriori reinterpreted as an imitation model. Ironically, their rationality properties come from their biological interpretation.
But if we consider a priori which dynamics could arise from an imitation protocol, then we arrive quite naturally at dynamics that provide an evolutionary advantage to rare strategies (or frequent strategies) in a sense that we will make clear. As in innovative dynamics, this advantage to rarity (or commonness) may offset the fact of being dominated, hence allowing dominated strategies to survive.
More precisely, imitative dynamics may be derived through a two-step imitation protocol. In the first step, an agent (henceforth, the \emph{revising agent}) meets another individual (the \emph{mentor}) uniformly at random.
In an infinite population, the probability that the mentor plays a given strategy is thus equal to the frequency of this strategy. In the second step, the revising agent decides to adopt the mentor's strategy or to
keep his own. The adoption rule depends on the dynamics but satisfies a monotonicity condition. Roughly, the probability of switching is larger if the revising agent's payoff is low, the mentor's payoff is large,
or both. This leads to dynamics that coincide with Nachbar's (1990) \cite{7} monotone dynamics: if strategy $i$ has a larger current payoff than strategy $j$, then its frequency has a larger per-capita growth-rate.
We thus suggest to call them \emph{monotone imitative dynamics}.\footnote{We thank an anonymous reviewer for suggesting this name.}
To motivate more general, non-monotone imitative dynamics, we consider revision protocols where the second step satisfies the standard monotonicity condition, but the first step is modified. Instead of always meeting a single other individual, a revising agent sometimes meets several. There are then many reasonable ways of choosing a mentor (or, depending on the interpretation, a strategy to be potentially imitated). The probability of envisioning to switch to a given strategy may then be lower or higher than the frequency of this strategy, in a way that may systematically favour rare or frequent strategies. This leads to dynamics that are no longer monotone in the sense of Nachbar (1990) \cite{7}, and under which dominated strategies may survive. Jorge Pe\~na brought to our attention that similar phenomena have been studied in the literature on the evolution of cooperation. In particular, a conformist bias may allow cooperation to survive in the prisoner's dilemma (e.g., Boyd and Richerson, 1988 \cite{3}; Henrich and Boyd, 2001 \cite{4}; Pe\~na et al., 2009 \cite{8}; and references therein).
We first illustrate these ideas on dynamics derived from imitation protocols based on adoption of successful strategies or departure from less successful ones,
but not on direct comparison between the payoff of an agent's current strategy and of the strategy he envisions to adopt. With such protocols, agents keep switching
from a strategy to another even when all strategies earn the same payoff.
For this reason, an advantage to rare or frequent strategies always bites, and dominated strategies may survive even in games with only two strategies.
The argument is simple: if the two strategies are twins, that is, always earn the same payoffs, then in the case of an advantage to rare strategies, the shares of both strategies tend to become equal. Technically, the population state where both strategies are played with probability 1/2 is globally asymptotically stable. If we penalize one of the strategies sufficiently little, to make it dominated, most solutions still converge to one or several rest points in the neighborhood of this population state, in which the dominated strategy is played with positive probability.
Of course, these rest points cannot be Nash equilibria. This reveals that the dynamics we just mentioned do not satisfy the evolutionary folk theorem (see, e.g., Weibull, 1995 \cite{12}). Nor do they satisfy the Positive Correlation condition, which stipulates that there is a positive correlation between the growth rates of strategies and their payoffs (or, equivalently, that against a constant environment, the average payoff in the population increases). Our main result is to show that survival of dominated strategies also occurs under dynamics that are derived from imitation protocols based on payoff comparison,
and that satisfy both the evolutionary folk theorem and an appropriate version of Positive Correlation. We show that this is the case as soon as they also satisfy the Continuity condition of Hofbauer and Sandholm and two additional conditions: Imitation, and Advantage to Rarity. The former requires that, except at Nash equilibria, a strategy which is currently played must be abandoned by some agents or imitated by others (or both). The latter assumes that if two strategies are twins, then the rarer one has a per-capita growth-rate that is no lower than the per-capita growth-rate of the more frequent one, and strictly higher in some precise circumstances. The Advantage to Rarity condition may be replaced by a similar Advantage to Frequency. We provide a number of imitation protocols leading to dynamics satisfying these assumptions.
Under these dynamics, if a solution converges to a rest point, this point must be a Nash equilibrium, hence put a zero weight on all strictly dominated strategies.
Therefore, to prove that dominated strategies may survive, we need to consider games where solutions cycle. We consider the same game as Hofbauer and
Sandholm, the hypnodisk game with a feeble twin, and use similar arguments, with some twists.
We check via simulations that dominated strategies can also survive in more standard games, such as a Rock-Paper-Scissors game augmented by a feeble twin
of Scissors, as also considered by Hofbauer and Sandholm. Finally, we show that simpler examples of survival of dominated strategies can be given if we depart
from single population dynamics and consider a population of agents facing an environment which oscillates for exogenous reasons.
The remainder of this article is organized as follows. Evolutionary dynamics are introduced in \cref{sec:EvolDyn}. \cref{sec:ImProc} describes
imitation processes favouring rare strategies or frequent strategies. \cref{sec:simple} gives a simple example of survival of dominated strategies under
dynamics based on protocols known as the imitation of success, or imitation driven by dissatisfaction. \cref{sec:paycomp} states our main results: that
survival of dominated strategies also occurs for imitation dynamics based on payoff comparison, and for any imitation dynamics satisfying some natural conditions,
on top of favouring rare or frequent strategies. The result is proved in \cref{sec:proof}. \cref{sec:disc} concludes. \cref{app:proofs} gathers
some proofs. \cref{app:moreprot} discusses more general imitation protocols than those described in the main text.
Finally, \cref{app:unilateral} gives simple examples of survival of dominated strategies under dynamics based on payoff comparison in
a population playing against an ad-hoc environment.
\section{Evolutionary dynamics}
\label{sec:EvolDyn}
With the exception of \cref{app:unilateral}, we focus on single-population dynamics.
There is a single, unit mass population of agents.
These agents may choose any pure strategy in the set $I = \{1, \dotsc, N\}$.
The frequency of strategy $i$ at time $t$ is denoted by $x_i(t)$.
The vector $x(t) = (x_i(t))_{i \in I}$ of these frequencies is called the population state at time $t$.
It belongs to the simplex $X = \{x \in \mathbb{R}^N_+, \sum_{i \in I} x_i = 1\}$.
The payoff for an agent playing strategy $i$ when the population state is $x$ is denoted by $F_i(x)$.
The vector $F(\cdot)=(F_1(\cdot),\dotsc,F_N(\cdot)) : X \to \mathbb{R}^N$
is called the game's payoff function. We frequently identify a (symmetric two-player) game and its payoff function.
We are interested in evolutionary dynamics of the form $\dot{x}= V^F(x)$, with $V^F$ Lipschitz continuous in $x$, to ensure existence and uniqueness of solutions through a given initial condition. Thus, the population state evolves as a function of the current state and the payoffs of the game. The vector field $V^F$ is assumed to depend continuously on the game's payoff function $F$.\footnote{To fix ideas, we use the sup norm on the space of payoff functions: $||F|| = \sup_{x \in X, i \in I} |F_i(x)|$, and again the sup norm $||(F,x)|| = \max(||F||, ||x||)$ to define joint continuity in $(F, x)$. This is not essential.}
A well-known example is the replicator dynamics:
\begin{equation}
\label{eq:rep}
\dot x_i(t) = x_i(t) \left[F_i(x(t)) - \bar{F}(x(t))\right]
\end{equation}
where $\bar{F}(x(t))= \sum_{i \in I} x_i(t) F_i(x(t))$ is the average payoff in the population.
We often omit to specify that the payoffs depend on the state, which depends on time.
Thus, instead of \eqref{eq:rep}, we write: $\dot x_i = x_i (F_i - \bar{F})$.
Pure strategy $i$ is strictly dominated by pure strategy $j$ if for all $x$ in $X$, $F_i(x) < F_j(x)$.
Pure strategy $i$ goes extinct, along a given solution of given dynamics, if $x_i(t) \to 0$ as $t \to +\infty$.
We want to understand under which dynamics pure strategies strictly dominated by other pure strategies always go extinct, at least for initial conditions in which all strategies are initially present, that is, in the relative interior of the simplex $X$.
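To illustrate extinction concretely, the following numerical sketch (our own toy example, not taken from the text) integrates the replicator dynamics \eqref{eq:rep} with an explicit Euler scheme for a linear payoff function $F(x) = Ax$, where the third row of the assumed matrix $A$ equals the first row minus one, so that strategy 2 is strictly dominated by strategy 0.

```python
import numpy as np

# Toy payoff matrix (an assumption for illustration): F(x) = A x.
# Row 2 equals row 0 minus 1, so strategy 2 is strictly dominated by strategy 0.
A = np.array([[ 0.0, 2.0, 1.0],
              [ 1.0, 0.0, 2.0],
              [-1.0, 1.0, 0.0]])

def replicator_step(x, dt=0.01):
    # Explicit Euler step of x_i' = x_i (F_i(x) - Fbar(x)).
    F = A @ x
    return x + dt * x * (F - x @ F)

x = np.array([1/3, 1/3, 1/3])
for _ in range(5000):        # integrate up to time t = 50
    x = replicator_step(x)
    x = np.maximum(x, 0.0)
    x /= x.sum()             # guard against numerical drift off the simplex

print(x)  # the frequency of the dominated strategy 2 is numerically zero
```

Since $F_2(x) = F_0(x) - 1$ for all $x$, the ratio $x_2/x_0$ decays like $e^{-t}$ along solutions, consistent with the elimination results for imitative dynamics recalled in the introduction.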
Before introducing imitative and innovative dynamics, let us explain a standard way to derive dynamics from micro-foundations.
The idea is that from time to time agents revise their strategies.
Due to this revision process, agents playing strategy $i$ switch to strategy $j$ at a certain rate, which depends on the population state and on the payoffs of the game.
We denote this rate by $\rho_{ij}(x, F)$, or simply $\rho_{ij}$ to keep formulas light. Thus, between time $t$ and $t+ dt$, a mass $x_i \rho_{ij} dt$ of agents switch from $i$ to $j$, and a mass $x_j \rho_{ji} dt$ switch from $j$ to $i$.
This leads to the ``mother equation":
\begin{equation}
\label{eq:mother}
\dot x_i = \sum_{j \neq i} x_j \rho_{ji} - x_i \sum_{j \neq i} \rho_{ij}
\end{equation}
where the first term is an inflow term (agents starting to play strategy $i$) and the second term an outflow
term (agents abandoning strategy $i$).\footnote{As the terms $i=j$ cancel, Eq. \eqref{eq:mother} may also be written as follows:
\[\dot x_i = \sum_{j \in I} x_j \rho_{ji} - x_i \sum_{j \in I} \rho_{ij} \]}
A specification of the rates $\rho_{ij}$ for all $(i, j)$ in $I \times I$ is called a revision protocol and defines dynamics.
The replicator dynamics for instance may be derived from at least three different protocols.
\begin{itemize}
\item (imitation of success) $\rho_{ij} = x_j (K + F_j(x))$, where $K$ is a constant large enough to ensure that $K+ F_j(x)$ is positive for all strategies $j$ in $I$ and all states $x$ in $X$.
\item (imitation driven by dissatisfaction) $\rho_{ij} = x_j (K - F_i(x))$, with $K > F_i(x)$ for all $i$ in $I$ and all $x$ in $X$.
\item (proportional pairwise imitation rule) $\rho_{ij} = x_j [F_j- F_i]_+$, where for any real number $a$, $[a]_+=\max(a, 0)$.
\end{itemize}
These three protocols model two-step processes: first, a revising agent meets another agent uniformly at random, hence playing $j$ with probability $x_j$;
second, he imitates her with a probability that depends on the payoff of this agent's strategy, his own, or a comparison of both.\footnote{We use ``He" for the revising agent, and ``She" for the agent being imitated.}
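As a quick sanity check, one can verify numerically that feeding the imitation-of-success protocol $\rho_{ij} = x_j(K + F_j)$ into the mother equation \eqref{eq:mother} recovers the replicator vector field $x_i(F_i - \bar F)$. The payoff matrix and state below are arbitrary random choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.normal(size=(N, N))           # arbitrary linear payoffs F(x) = A x
K = 10.0                              # large enough that K + F_j(x) > 0 on X

x = rng.dirichlet(np.ones(N))         # a random interior population state
F = A @ x
rho = np.tile(x * (K + F), (N, 1))    # rho[i, j] = x_j (K + F_j), independent of i

# Mother equation (the i = j terms cancel, so summing over all j is harmless):
inflow = x @ rho                      # inflow_i  = sum_j x_j rho[j, i]
outflow = x * rho.sum(axis=1)         # outflow_i = x_i sum_j rho[i, j]
xdot_mother = inflow - outflow

xdot_rep = x * (F - x @ F)            # replicator field x_i (F_i - Fbar)
print(np.allclose(xdot_mother, xdot_rep))  # True
```

The same computation with $\rho_{ij} = x_j(K - F_i)$ (imitation driven by dissatisfaction) also yields the replicator field, since the inflow and outflow then equal $x_i(K - \bar F)$ and $x_i(K - F_i)$ respectively.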
\emph{Imitative dynamics.} More generally, Sandholm (2010) \cite{10} calls dynamics imitative if they may be derived from a revision protocol of the form \[\rho_{ij} (F,x) = x_j r_{ij}(F, x)\]
with for all $x$ in $X$, all strategies $i, j, k$ in $I$:
\begin{equation}
\label{eq:monotonicity}
F_i(x) < F_j(x) \Leftrightarrow r_{kj}(F,x) - r_{jk}(F,x) > r_{ki}(F, x) - r_{ik}(F,x)
\end{equation}
Like the replicator dynamics, these dynamics may be seen as modeling a two-step process where, in step 1, a revising agent meets another agent from the population at random, and in step 2, decides to imitate her or not.
Condition \eqref{eq:monotonicity} is a monotonicity condition.
It means that in step 2, the difference between the conditional imitation rates from $k$ to $i$ and from $i$ to $k$ increases with the payoff of strategy $i$. In particular, if strategy $j$ earns more than strategy $i$, then in step 2, an agent playing strategy $i$ is more likely to adopt $j$ than an agent playing $j$ is to adopt $i$.
It is easy to see that imitative dynamics coincide with a class of dynamics known as monotone dynamics (Viossat, 2015 \cite{11}, footnote 6).
These are dynamics of the form
\[\dot{x}_i=x_i g_i(x)\]
with $g_i$ Lipschitz continuous and, for all $x \in X$, and all $(i, j)$ in $I \times I$,
\begin{equation*}
g_{i}(x) < g_{j}(x) \Leftrightarrow F_{i}(x) < F_{j}(x).
\end{equation*}
It follows that in imitative dynamics, per-capita growth rates of pure strategies are ordered as their payoffs.
As a result, pure strategies strictly dominated by other pure strategies are always eliminated (Akin, 1980 \cite{1}; Nachbar, 1990 \cite{7}; Samuelson and Zhang, 1992 \cite{9}; Hofbauer and Weibull, 1996 \cite{5}).
To distinguish them from more general imitation processes that we will consider, we refer to these dynamics as \emph{monotone imitative dynamics}. This monotone character does not only derive from the monotonicity condition \eqref{eq:monotonicity}, but also from the assumption that the probability of envisioning to adopt a given strategy is equal to the frequency of this strategy.
\emph{Innovative dynamics.} By contrast with imitative dynamics, in innovative dynamics, strategies that are not initially played may appear.
A leading example is the Smith dynamics:
\begin{equation}
\label{eq:Smith}
\dot{x}_i = \sum_{j \in I} x_j [F_i(x) - F_j(x)]_+ - x_i \sum_{j \in I} [F_j(x) - F_i(x)]_+
\end{equation}
It may be derived by assuming that, first, revising $i$-strategists\footnote{An $i$-strategist is an agent currently using strategy $i$.} pick a strategy $j$ uniformly at random in the list of possible strategies, and second, adopt it with probability proportional to $[F_j - F_i]_+$.
This leads to $\rho_{ij} = \frac{1}{N} [F_j - F_i]_+$.
This is similar to the proportional pairwise imitation rule defining the replicator dynamics, except that in the first step, strategy $j$ is selected as a candidate new strategy with probability $1/N$ instead of $x_j$.\footnote{In Eq.\eqref{eq:Smith}, as standard, we omitted the factor $1/N$, which only affects the time-scale.}
Other well known innovative dynamics are the Brown-von Neumann-Nash dynamics, or BNN:
\begin{equation*}
\dot x_i = \left[F_i(x) - \bar{F}(x)\right]_+ - x_i \sum_{k \in I} [F_k(x) - \bar{F}(x)]_+
\end{equation*}
They model a two-step process where, in step 1, revising $i$-strategists pick a strategy $j$ uniformly at random in the list of possible strategies, and, in step 2, adopt it with probability proportional to $[F_j - \bar{F}]_+$, where $\bar{F}(x)=\sum_i x_i F_i(x)$ is the average payoff in the population.
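Both innovative vector fields are easy to compute directly. The sketch below (with a made-up cyclic payoff matrix, chosen for illustration) checks that each field sums to zero, so solutions stay on the simplex, and that under the Smith dynamics an unplayed best reply has a positive growth rate, which is the Innovation property.

```python
import numpy as np

def smith_field(x, A):
    # Smith dynamics: x_i' = sum_j x_j [F_i - F_j]_+  -  x_i sum_j [F_j - F_i]_+
    F = A @ x
    D = np.maximum(F[:, None] - F[None, :], 0.0)   # D[i, j] = [F_i(x) - F_j(x)]_+
    return D @ x - x * D.sum(axis=0)

def bnn_field(x, A):
    # BNN dynamics: x_i' = [F_i - Fbar]_+  -  x_i sum_k [F_k - Fbar]_+
    F = A @ x
    excess = np.maximum(F - x @ F, 0.0)
    return excess - x * excess.sum()

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])   # assumed cyclic payoffs, for illustration only

x = np.array([0.5, 0.3, 0.2])
print(smith_field(x, A).sum(), bnn_field(x, A).sum())  # both 0 (up to rounding)

e0 = np.array([1.0, 0.0, 0.0])   # strategy 2 is an unplayed best reply at e0
print(smith_field(e0, A))        # its component is positive: Innovation
```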
\emph{Innovative Dynamics favour rare strategies, monotone imitative dynamics do not.} Building on Berger and Hofbauer (2006) \cite{2}, Hofbauer and Sandholm (2011) \cite{5} showed that for the Smith and BNN dynamics, and many others, there are games in which a pure strategy strictly dominated
by another pure strategy survives, for most initial conditions. This holds for any dynamics satisfying four natural requirements, called \emph{Innovation},
\emph{Continuity}, \emph{Positive Correlation} and \emph{Nash Stationarity}. As explained in the introduction, the intuition is that, taken together, Innovation and Continuity favour rare strategies, in the sense that a rare strategy can have a higher per-capita growth-rate than a better but more frequent strategy.
By contrast, monotone imitative dynamics favour neither rare nor frequent strategies: they are neutral. Under monotone imitative dynamics,
if the payoff of strategy $i$ is less than the payoff of strategy $j$, then its per-capita growth rate is less than that of strategy $j$. This is true whatever the frequencies of strategies $i$ and $j$. The reason is not that this property is completely natural. Indeed, it does not hold for innovative dynamics. Rather, this is because the imitation processes modeled
by monotone imitative dynamics are of a particular kind, inspired by the replicator dynamics. It is actually easy to imagine dynamics modeling imitation processes
but advantaging rare strategies, or frequent ones.\footnote{Of course, such dynamics, though modeling imitation processes, do not satisfy Sandholm's definition of
imitative dynamics. This is the key point: this definition of imitative dynamics does not encompass all reasonable imitation processes.} For these dynamics, as for
innovative dynamics, the advantage given to rare (or frequent) strategies should be able to offset the fact of being strictly dominated, allowing for survival of dominated strategies.
This is what we show.
We begin by providing examples of imitation dynamics favouring rare or frequent strategies. They are all based on the idea that instead of deciding to change his strategy or not upon meeting only one other agent, a revising agent might meet several other agents before taking his decision.
\section{Imitation processes advantaging rare or frequent strategies}
\label{sec:ImProc}
\subsection{Examples} Loosely speaking, dynamics favour rare strategies if, when strategies $i$ and $j$ earn the same payoff but strategy $i$ is rarer, strategy $i$ has a higher per-capita growth rate than strategy $j$.
To see how this could arise in an imitation process, consider revision protocols of the form:
\begin{equation}
\label{eq:gen2step}
\rho_{ij} (F, x)= p_{ij} (F, x) r_{ij} (F, x), \text{ with } p_{ij}(F, x) = \lambda_{ij}(F, x) x_j
\end{equation}
for some positive functions $\lambda_{ij}$.
This models a two-step process: in step 1, a revising $i$-strategist gets interested
in strategy $j$ with a probability $p_{ij}$ that we call a \emph{selection rate}.
We allow it to depend on both payoffs and frequencies, but in our main examples,
it depends only on strategy frequencies; in step 2, he adopts strategy $j$ with a
probability proportional to a quantity $r_{ij}$ that depends on payoff considerations,
and that we call an \emph{adoption rate}.\footnote{We allow these adoption rates to
depend on both payoffs and frequencies as we want to allow for protocols comparing
one's current payoff to, e.g., the average payoff in the population, which the vector
$F(x)$ alone does not allow to compute; nevertheless, we have in mind a payoff-based
second step.} The assumption $p_{ij}(F, x) = \lambda_{ij}(F, x) x_j$ just means that the
probability $p_{ij}$ to consider switching to strategy $j$ is zero whenever $x_j=0$, since we are modeling an imitation process.
Our adoption rates $r_{ij}$ will typically be monotonic, in the sense of Eq. \eqref{eq:monotonicity}.
Thus, the difference with monotone imitative dynamics is that the probability with which a revising agent gets interested in strategy $j$ need not be exactly $x_j$; that is, the $\lambda_{ij}$ need not be all constant and equal to $1$.
Here are some examples.
\begin{example}
\label{ex:list}
Meeting several agents and making a list of their strategies: a protocol advantaging rare strategies.
\end{example}
Assume that, in step 1, a revising agent does not meet one but $m$ other agents uniformly at random, where $m$ is a bounded random variable independent of the strategy played by the agent.
He then makes a list of the strategies they play, and selects at random a strategy in this list, as a candidate.
He might then learn more about this strategy's payoff, by talking to the agent he met, by experimenting with this strategy for a short, un-modeled period of time, or by some thought experiment.
He then decides to adopt it or not according to a standard adoption rate $r_{ij}$.
As a concrete example, assume that the revising agent meets one agent playing strategy 1, two playing strategy 2 and two playing strategy 3.
He would then make a list of the strategies met: $\{1, 2, 3\}$, and pick each of them with the same probability, hence with probability $1/3$.\footnote{Picking a strategy with probability proportional to the number of agents met playing it (so here probabilities $1/5$, $2/5$, $2/5$) boils down to selecting a candidate uniformly at random, just breaking the selection process in two.
So this would lead to a neutral step 1.
For similar reasons, if $m=1$ or $m=2$, the above process leads to a neutral step 1.
This is why we need $m\geq 3$ with positive probability.} This is similar to protocols generating Smith or Brown-von Neumann-Nash dynamics, except that, instead of having a list of all possible strategies, an agent becomes aware of other possible strategies by meeting agents using them.
Provided that the number $m$ of agents met is equal to 3 or more with positive probability, the above step 1 advantages rare strategies compared to the reference case $p_{ij}(x)= x_j$, in the sense that the lower $x_j$, the higher the multiplicative factor $\lambda_{ij}$ in \eqref{eq:gen2step}. In other words, in proportion to their frequencies,
rare strategies are more often selected at step 1 than frequent strategies.
Another interpretation is as follows. Assume that after deciding which strategy to investigate, the revising agent obtains information about its payoffs by talking to a randomly selected mentor: one of the agents playing this strategy among those he met. Then if Alice plays a rarer strategy than Bob, she is (ex-ante) more likely to serve as a mentor.
\begin{proposition}\label{prop:ex1}
Assume $m \geq 3$ with positive probability.
Then in the first step of \cref{ex:list}, $p_{ij}(x) = x_j \lambda_j(x)$ where the functions $\lambda_j$
satisfy
\[\forall x \in X, \forall (j,k) \in I \times I, x_j < x_k \Rightarrow \lambda_j(x) > \lambda_k(x)\]
\end{proposition}
\begin{proof}
See \cref{app:proofs}.
\end{proof}
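For a fixed number of meetings, the selection rates of this protocol can be computed exactly by enumeration. The sketch below (our own check, with $m = 3$ and an arbitrary state; the revising agent's own strategy is not excluded from the list) recovers the conclusion of \cref{prop:ex1}: the factor $\lambda_j = p_j/x_j$ is strictly decreasing in $x_j$.

```python
import itertools
import numpy as np

x = np.array([0.2, 0.3, 0.5])   # an arbitrary population state (assumption)
N = len(x)
p = np.zeros(N)
# Enumerate all ordered triples of strategies met (m = 3 meetings, i.i.d.
# draws from x), then select uniformly among the distinct strategies seen.
for triple in itertools.product(range(N), repeat=3):
    w = x[list(triple)].prod()   # probability of this sequence of meetings
    seen = set(triple)
    for j in seen:
        p[j] += w / len(seen)

lam = p / x   # lambda_j = p_j / x_j
print(lam)    # strictly decreasing in x_j: rare strategies are over-selected
```

Here $\lambda \approx (1.09, 1.04, 0.94)$: per capita, the rarest strategy is selected about $16\%$ more often than the most frequent one.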
We do not need step 1 to be exactly as described above.
Any protocol whose first step is a combination of the above one and a standard one ($p_{ij} =x_j$) would favour rare strategies in a similar sense.
Our results also apply to protocols that cannot be separated in two steps in the sense of Eq. \eqref{eq:gen2step}, but still favour rare strategies. This is discussed in \cref{app:moreprot}.
\begin{example}
\label{ex:maj}
Following the majority: a protocol advantaging frequent strategies.
\end{example}
As in the previous example, assume that a revising agent first meets $m$ other agents, where $m$ is a bounded random variable independent of the strategy played by the agent.
But now, he selects as a candidate the strategy played by the highest number of these agents, if there is only one.
If there are several such strategies, he selects one of these strategies uniformly at random.
Thus, if he meets one agent playing strategy 1, two playing strategy 2 and two playing strategy 3, then with probability 1/2 he selects strategy 2, and with probability 1/2, he selects strategy 3.
This step 1 advantages frequent strategies in the sense that the higher $x_j$, the higher the multiplicative factor $\lambda_{ij}$ (which here is independent of $i$).
In this sense, frequent strategies are imitated more often, or more precisely, more often selected at step 1.
\begin{proposition}\label{prop:ex2}
Assume that $m \geq 3$ with positive probability.
Then in the first step of \cref{ex:maj}, $p_{ij}(x) = x_j \lambda_j(x)$ where the functions $\lambda_j$ satisfy
\[\forall x \in X, \forall (j, k) \in I \times I, x_j < x_k \Rightarrow \lambda_j(x) < \lambda_k(x)\]
\end{proposition}
\begin{proof}
See \cref{app:proofs}.
\end{proof}
As for \cref{ex:list}, a number of variants could be considered that cannot easily be put in the form \eqref{eq:gen2step}, but still favour frequent strategies, and to which our results would apply.
Note also that other forms of conformity biases have been studied in the literature on the evolution of cooperation, and shown to allow for the survival of cooperation in the prisoner's dilemma (Boyd and Richerson, 1988 \cite{3}; see also Eq. (1) in Heinrich and Boyd, 2001 \cite{4}, or in Pe\~na et al., 2009 \cite{8}).
\begin{example}
\label{ex:other}
Trying to meet agents playing other strategies than one's own: a protocol disadvantaging frequent strategies.
\end{example}
Assume that in step $1$, a revising agent of type $i$ meets somebody uniformly at random in the population.
If this person is of a type $j \neq i$, then the revising agent considers switching to $j$.
If this person is also of type $i$, then the revising agent tries again.
If after trying $m$ times, he did not manage to meet an agent of another type, he stops and keeps using strategy $i$.
The maximal number of trials $m$ could be a random variable.
We only assume that the law of this maximal number is the same for all strategies, that it is almost surely finite, and that with positive probability, it is equal to $2$ or more.
The motivation for such a behavior is that an agent currently playing strategy $i$ already knows that this is a possible behavior and already has a pretty good idea of how good this strategy is.
So talking with an agent of the same type is not very informative. Upon meeting an agent of the same type, a revising agent might thus be willing to try to meet somebody else.\footnote{If the payoff of a strategy is not deterministic, talking with other agents playing the same strategy is useful, but likely less so than talking to an agent with a different behaviour.}
For any $j \neq i$, the probability that a revising agent of type $i$ meets an agent playing another strategy for the first time at the $k^{th}$ trial, and that this agent is of type $j$, is $x_i^{k-1} x_j$.
So the probability $p_{ij}$ that a revising agent of type $i$ considers switching to strategy $j$ is:
\[p_{ij} = x_j \lambda (x_i), \text{ with } \lambda(x_i) = 1 + x_i + \dotsm + x_i^{m-1}\]
The function $\lambda$ is strictly increasing. In this sense frequent strategies imitate more often than rare ones (or rather, are proportionally more likely to select another type at step 1). This is because agents from frequent types try on average more times to meet another type than agents from rare types.
This favours rare types but not in the same way as in \cref{ex:list}.
Indeed, the fact that a strategy is rare will not increase its chance to be considered for imitation, in the sense that if $j$ and $k$ are two strategies different from $i$, $p_{ij}/x_j = p_{ik}/x_k = \lambda(x_i)$, irrespective of the relative frequencies of strategies $j$ and $k$.
So $j$ and $k$ have the same ``extra-probability" of being selected by $i$.
In terms of the mother-equation \eqref{eq:mother}, the advantage of rare strategies is a higher inflow in \cref{ex:list} and a lower outflow in \cref{ex:other}.
The first step of \cref{ex:other} may also be interpreted as follows: the revising agent meets $m$ agents, keeps the same strategy if they all play as he does, and otherwise disregards all agents playing his strategy; he then picks up one of the remaining agents uniformly at random, and chooses her strategy as a candidate.
Thus, if he plays strategy 3 and meets one agent playing strategy 1, two playing strategy 2 and two playing strategy 3, he ends up choosing strategy 1 with probability 1/3 and strategy 2 with probability 2/3.
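The closed form for $p_{ij}$ can be checked against the geometric series it comes from: summing over $j \neq i$ gives $(1-x_i)(1 + x_i + \dotsb + x_i^{m-1}) = 1 - x_i^m$, the probability of meeting some other type within $m$ trials. A small sketch, with an arbitrary state and $m$ chosen for illustration:

```python
import numpy as np

x = np.array([0.5, 0.3, 0.2])   # assumed population state
m, i = 4, 0                      # at most m trials; the revising agent plays i
lam = sum(x[i] ** k for k in range(m))     # 1 + x_i + ... + x_i^(m-1)
p = {j: x[j] * lam for j in range(len(x)) if j != i}

total = sum(p.values())
print(total, 1 - x[i] ** m)      # both 0.9375: the geometric series telescopes
```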
\begin{example}
\label{ex:confirmation}
Confirmation bias: a protocol favouring frequent strategies.
\end{example}
Assume that a revising agent meets $m$ other agents and that his main purpose is to be reassured that his strategy is not completely foolish.
More precisely, if at least one of the agents met plays the same strategy as he does, then he keeps it; otherwise, he selects uniformly at random one of the agents met and envisions to imitate her.
This leads to $$p_{ij} = (1-x_i)^m \frac{x_j}{1-x_i}= (1- x_i)^{m-1} x_j$$ for any $i \neq j$.
Thus, $\lambda_{ij}(x)=
(1- x_i)^{m-1}$. If $m \geq 2$, this expression is strictly decreasing in $x_i$, hence this protocol favours frequent strategies.
This is an example of frequent strategies imitating less often than rare strategies (or rather, being proportionally less likely to select another strategy as a candidate at step 1).
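The selection rates of this protocol are simple enough to verify directly. In the sketch below (arbitrary frequencies, chosen for illustration), the factor $\lambda_{ij}(x) = (1-x_i)^{m-1}$ is strictly decreasing in $x_i$, so agents of frequent types select other strategies less often.

```python
# Confirmation-bias protocol of Example 4: for j != i,
# p_ij = (1 - x_i)^(m-1) * x_j, so lambda_ij = (1 - x_i)^(m-1).
def p_ij(x_i, x_j, m):
    return (1.0 - x_i) ** (m - 1) * x_j

m = 3
# lambda_ij decreases in x_i: agents of frequent types are more likely to be
# "reassured" and keep their strategy, hence select other strategies less often.
lams = [(1.0 - xi) ** (m - 1) for xi in (0.1, 0.4, 0.7)]
print(lams)  # approx. [0.81, 0.36, 0.09], strictly decreasing
```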
\subsection{A definition of favouring rare or frequent strategies}
Consider a two-step revision protocol of the form \eqref{eq:gen2step}.\footnote{
Our results go through if all definitions in this section are restricted to the case where $i$ and $j$ are twin strategies, in that they have the same payoff function: $F_i = F_j$. This is because the strategy of the proof is to first use the advantage to rare or frequent strategies in a game with twin strategies, and then penalize one of them to make it dominated.}
\begin{definition} The first step is \emph{fair} if $\lambda_{ij}= 1$ for all $i \neq j$.
\end{definition}
\begin{definition}[being selected more often]
Per capita, rare strategies are more often selected at step 1 than frequent ones if for all $(F, x)$ and all strategies $i, j$ such that $x_i < x_j$, $\lambda_{ji}(F, x) \geq \lambda_{ij}(F,x)$, and $\lambda_{ki}(F,x) \geq \lambda_{kj}(F, x)$ for all strategies $k \notin\{i,j\}$.
They are selected strictly more often if these conditions hold with strict inequalities.
Frequent strategies are selected more often (in a weak or strict sense) if the same conditions hold when $x_i > x_j$.
\end{definition}
\begin{definition}[selecting other strategies less often]
Per capita, rare strategies select other strategies less often if for all $(F, x)$ and all strategies $i, j$ such that $x_i < x_j$, $\lambda_{ij} \leq \lambda_{ji}$ and for all strategies $k \notin\{i,j\}$, $\lambda_{ik} \leq \lambda_{jk}$.
They select other strategies strictly less often if these conditions hold with strict inequalities.
Frequent strategies select other strategies less often (in a weak or strict sense) if the same conditions hold when $x_i > x_j$.
\end{definition}
\begin{definition}[favouring rare or frequent strategies]
\label{def:adv}
Step 1 favours rare strategies if rare strategies are more often selected and select other strategies less often than frequent ones, and at least one of these properties holds strictly.
It favours frequent strategies if frequent strategies are more often selected and select other strategies less often, and at least one of these properties holds strictly.
\end{definition}
With this vocabulary, the protocols of Examples 1 and 3 both favour rare strategies, but not for the same reason.
In \cref{ex:list}, rare strategies are selected strictly more often
than frequent ones, while in \cref{ex:other}, they select other strategies strictly less often.
The protocols of Examples 2 and 4 favour frequent strategies.
\section{A very simple example of survival of dominated strategies}
\label{sec:simple}
In this section, we consider two-step revision protocols \eqref{eq:gen2step} where in the second step, the adoption rates $r_{ij}$ are always positive.
This is the case in the imitation of success, in imitation driven by dissatisfaction, and in any generalization of the form $r_{ij} = f(F_i) g(F_j)$ with $f$ and $g$ positive.\footnote{It would be natural to assume $f$ decreasing, $g$ increasing,
but this is not needed.}
For such protocols, as soon as the first step is not fair, survival of dominated strategies occurs in the simplest of games.
\begin{proposition}
\label{prop:simple} Consider dynamics generated by protocols such that the functions $\lambda_{ij}$ and $r_{ij}$ are jointly continuous in $(F, x)$, the adoption rates $r_{ij}$ are strictly positive, and $r_{ij}(F, x) = r_{ji}(F, x)$ whenever $F_i(x)= F_j(x)$. Consider the $2 \times 2$ game $\Gamma^{\varepsilon}$ with payoff function $F^{\varepsilon} = (F_1^{\varepsilon}, F_2^{\varepsilon})$ given by $F^{\varepsilon}_1(x)= 1$ and $F^{\varepsilon}_2(x)= 1- \varepsilon$, for all $x$ in $X$.
\begin{enumerate}
\item If the first step favours rare strategies, then for any $\alpha > 0$,
there exists $\bar{\varepsilon}>0$ such that, for any $\varepsilon \in [0 , \bar{\varepsilon}]$
and for any initial condition $x(0)$ in $\mathrm{int}(X)$, $\liminf_{t \to +\infty} x_2(t) \geq 1/2 - \alpha$.
\item If the first step favours frequent strategies, then for any $\alpha > 0$,
there exists $\bar{\varepsilon}>0$ such that, for any $\varepsilon \in [0 , \bar{\varepsilon}]$
and for any initial condition $x(0)$ such that $x_2(0) \geq 1/2 + \alpha$, $x_2(t) \to 1$ as $t \to +\infty$.
\item If there exists $\hat{x} \in \mathrm{int}(X)$ such that $\lambda_{12}(F^0, \hat{x}) > \lambda_{21}(F^0, \hat{x})$, then
there exists $\bar{\varepsilon}>0$ such that, for any $\varepsilon \in [0 , \bar{\varepsilon}]$,
for any initial condition such that $x_2(0) >\hat{x}_2$, $\liminf x_2(t) \geq \hat{x}_2$.
\end{enumerate}
\end{proposition}
\begin{proof}
1) With only two strategies, the mother-equation \eqref{eq:mother} boils down to \[\dot x_1 = x_1(1-x_1) h(F, x) \text{ with } h(F, x)= \lambda_{21} r_{21} - \lambda_{12} r_{12}.\] Our assumptions ensure that $h$ is jointly continuous.
In game $\Gamma^{0}$, $r_{21}= r_{12}$ for all $x$, hence $h(F^0, x)= (\lambda_{21} - \lambda_{12}) r_{12}$.
Since we assume $r_{12}>0$, $h(F^0, x)$ has the sign of $\lambda_{21} - \lambda_{12}$.
Thus, if step 1 favours rare strategies, $h(F^0, x) > 0$ if $0 \leq x_1 < 1/2$ and $h(F^0, x)< 0$ if $1/2 < x_1 \leq 1$.
Thus, in game $\Gamma^0$, $x_1(t) \to 1/2$ as $t \to +\infty$ for any interior initial condition.
Now let $\alpha \in (0, 1/2)$.
Since the sets $[0, 1/2 - \alpha]$ and $[1/2+ \alpha, 1]$ are compact, and $h$ is jointly continuous, it follows that for any $\varepsilon >0$ small enough, in game $\Gamma^{\varepsilon}$, we still have $h(F^{\varepsilon}, x) > 0$ on $[0, 1/2 - \alpha]$ and $h(F^{\varepsilon}, x) < 0$ on $[1/2 + \alpha, 1]$. Therefore, in $\Gamma^{\varepsilon}$, for any interior initial condition,
\[\frac{1}{2} - \alpha \leq \liminf_{t \to + \infty} x_1(t) \leq \limsup_{t \to + \infty} x_1(t) \leq \frac{1}{2} + \alpha,\]
and the same bounds hold for $x_2(t) = 1 - x_1(t)$, which proves the claim.
2) Similar arguments show that, if step 1 favours frequent strategies, then $x_2(t) \to 1$ for any initial condition such that $x_2(0) >1/2$ in game $\Gamma^0$, and for any initial condition such that $x_2(0) \geq 1/2+ \alpha$ in $\Gamma^{\varepsilon}$, provided that $\varepsilon$ is small enough.
3) The assumption essentially amounts to the first step not being fair: up to relabelling the two strategies, non-fairness provides a state $\hat{x}$ with $\lambda_{12}(F^0, \hat{x}) > \lambda_{21}(F^0, \hat{x})$.
Then in $\Gamma^{0}$, $h(F^0, \hat{x}) <0$, hence for any $\varepsilon>0$ small enough, $h(F^{\varepsilon}, \hat{x}) < 0$.
It follows that at $\hat{x}$, $\dot x_2 >0$.
Since the state space is a segment, the result follows.
\end{proof}
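To illustrate part 1, one can integrate the two-strategy dynamics numerically. The sketch below uses an assumed rare-favouring selection rule $\lambda_{ij} = 2 - x_j$ and imitation-of-success rates $r_{ij} = u_j$; these specific choices are ours and merely satisfy the hypotheses of the proposition:

```python
# Euler integration of xdot_1 = x1 (1 - x1) (lam21 u1 - lam12 u2), with the
# assumed (illustrative) rare-favouring first step lam_ij = 2 - x_j and
# imitation-of-success adoption rates r_ij = u_j.

def simulate(eps, x1_0, dt=0.01, steps=200_000):
    u1, u2 = 1.0, 1.0 - eps
    x1 = x1_0
    for _ in range(steps):
        lam21 = 2.0 - x1              # strategy 1 is selected at rate 2 - x1
        lam12 = 2.0 - (1.0 - x1)      # strategy 2 is selected at rate 2 - x2
        h = lam21 * u1 - lam12 * u2
        x1 += dt * x1 * (1.0 - x1) * h
    return x1

# With eps = 0 the unique interior rest point is x1 = 1/2; for eps = 0.1 it
# solves 2 - x1 = (1 + x1)(1 - eps), i.e. x1 = 11/19, so the dominated
# strategy 2 retains the share x2 = 8/19, close to 1/2.
assert abs(simulate(0.0, 0.9) - 0.5) < 1e-3
assert abs(simulate(0.1, 0.9) - 11.0 / 19.0) < 1e-3
```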
\emph{How dominated can surviving strategies be?} Like the results of Hofbauer and Sandholm, the proof of \cref{prop:simple}
relies on arbitrarily small domination levels. It does not say whether strategies that are substantially dominated can survive. To tackle
this question, consider a game with only two strategies, $1$ and $2$, with constant payoffs: $F_1(x) = u_1$ and $F_2(x) = u_2 < u_1$ for all $x$ in $X$. For
a protocol of type \eqref{eq:gen2step}, there are at least as many transitions from strategy 1 to strategy 2 as from strategy 2 to strategy 1 (hence the
frequency of strategy 2 does not decrease) if and only if $\lambda_{12} r_{12} \geq \lambda_{21} r_{21}$, or equivalently
\begin{equation*}
\frac{r_{21}}{r_{12}} \leq \frac{\lambda_{12}}{\lambda_{21}}
\end{equation*}
The LHS may be seen as the ``payoff effect" and the RHS as the ``frequency effect".
This inequality takes a simple form if we assume
\begin{itemize}
\item $r_{ij} = u_j \geq 0$, as in the imitation of success.
\item $p_{ij}= x_j \lambda(x_i)$ with $\lambda(x_i) = 1 + x_i + \dotsm + x_i^{m-1}$, as in \cref{ex:other} from \cref{sec:ImProc}, where a revising agent tries to meet an agent playing another strategy up to $m$ times before giving up.
\end{itemize}
It is then easy to see that the strictly dominated strategy $2$ survives whenever $u_2 > u_1/m$. Moreover, in that case, $x_2(t) \to x_2^{\ast}$ where $x_2^{\ast}$ is the solution of
\[ u_2/u_1 = \frac{x_2(1- x_2^m)}{x_1 (1-x_1^m)} \text{ with } x_1 = 1-x_2.\]
\cref{FigA} draws the value of the asymptotic frequency $x_2^{\ast}$ of the dominated strategy as a function of the ratio $u_2/u_1$, for various values of $m$. For instance, if $m=2$, the dominated strategy survives if its payoff is at least half the payoff of the dominant strategy $(u_2/u_1 \geq 1/2)$, its asymptotic frequency is larger than 0.2 if $u_2/u_1 \geq 2/3$, and larger than $1/3$ if $u_2/u_1 \geq 0.8$. Larger values of $m$ lead to even larger frequencies of the dominated strategy. Thus, at least for this protocol, relatively large differences in payoffs still allow for survival of strictly dominated strategies at significant frequencies.
\begin{figure}
\caption{\textbf{Asymptotic frequency of the dominated strategy as a function of the payoff ratio $u_2/u_1$ for various values of $m$.}}
\label{FigA}
\end{figure}
\section{Imitation through comparison of payoffs}
\label{sec:paycomp}
In the imitation protocols considered in the previous section, adoption rates are always positive, and rest points correspond to an equilibrium between inflow and outflow rather than to an absence of strategy changes. Though these adoption rates are standard, they have the debatable property that revising agents do not compare the payoff of their current strategy to the payoff of the strategy they consider adopting (or to the average payoff in the population). As a result, agents may switch to a strategy with currently lower payoffs than their own (or lower than average).
In this section, we show that survival of dominated strategies also occurs for adoption rates based on payoff comparison, such as $r_{ij} = [F_j - F_i]_+$, $r_{ij}= [F_j - \bar{F}]_+$, or generalizations thereof.\footnote{The examples we give cannot be simple $2 \times 2$ games, as in the previous section. Indeed, in a game with only two strategies, such adoption rates prevent agents playing the dominant strategy from adopting the dominated one, so the dominated strategy goes extinct. This is also the case for any dynamics satisfying Positive Correlation (defined below).} To do so, we first need to show that, under mild additional assumptions, these protocols lead to dynamics satisfying the following version of Positive Correlation for imitation processes:
\begin{equation}
\label{eq:PC}
\tag{PC$'$}
\sum_{i} \dot{x}_i F_i > 0
\end{equation}
whenever $x$ is not a population equilibrium, that is, a population state at which all strategies with a positive frequency get the same payoff (or in other words, a rest point of the replicator dynamics). An interpretation of \eqref{eq:PC} is that, in a fixed environment, the average payoff in the population would increase, unless it is already maximal.\footnote{On top of replacing Nash equilibrium with population equilibrium, condition \eqref{eq:PC} somehow combines the Positive Correlation condition of Hofbauer and Sandholm ($\sum_{i} \dot{x}_i F_i > 0$ whenever $\dot{x} \neq 0$) and their Nash Stationarity condition ($\dot{x} \neq 0$ whenever $x$ is not a Nash equilibrium).}
\subsection{Protocols leading to Positive Correlation}
Define the sign function by, for any real number $a$: $\mathrm{sgn}(a) = 1$ if $a>0$, $\mathrm{sgn}(a) = -1$ if $a <0$, and $\mathrm{sgn}(0)=0$.
\begin{proposition}
\label{prop:PC}
Consider dynamics arising from protocols of type \eqref{eq:gen2step}. Condition \eqref{eq:PC} is satisfied if at least one of the following properties holds:\footnote{The equalities below are between functions: $F_i$, $F_j$ may depend on $x$, and $r_{ij}$, $r_i$, $r_j$, $p_{ij}$, $\lambda_i$, $\lambda_j$ may depend on $(F, x)$.}
\begin{description}
\item[a)] (pairwise comparison) $\mathrm{sgn}(r_{ij}) = \mathrm{sgn}([F_j - F_i]_+)$.
\item[b)] (imitation of greater than average success)\footnote{If $f$ is constant, the second step is purely imitation of greater than average success. If $f$ is decreasing, it combines imitation of greater than average success with imitation driven by dissatisfaction.}\\
$p_{ij} = \lambda_j x_j$ with $\lambda_j$ positive; $r_{ij}= f(F_i) r_j$ with $f$ positive, nonincreasing, and $\mathrm{sgn}(r_j) = \mathrm{sgn}([F_j - \bar{F}]_+)$.
\item[c)] (imitation driven by less than average success)\footnote{If $g$ is constant, the second step is purely imitation driven by less than average success. If $g$ is decreasing, it combines imitation driven by less than average success with imitation of success.}\\
$p_{ij} = \lambda_i x_j $ with $\lambda_i$ positive; $r_{ij}= g(F_j) r_i$ with $g$ positive, nondecreasing, and $\mathrm{sgn}(r_i) = \mathrm{sgn}([\bar{F}- F_i]_+)$.
\end{description}
\end{proposition}
The intuition for this result is as follows: in case a), agents always switch to strategies with better payoff than their own; in case b), agents only switch to strategies $j$ earning more than $\bar{F}$, and for any such $j$, the average former payoff of agents switching to $j$ is no more than $\bar{F}$; in case c), agents only quit strategies $i$ earning less than $\bar{F}$, and for any such strategy $i$, on average, the new strategy of agents quitting $i$ earns at least $\bar{F}$.
It follows that in all three cases, in a fixed environment, the average population payoff would increase, which is one of the interpretations of condition \eqref{eq:PC}.
A formal proof of \cref{prop:PC} is given below.
\begin{proof}
We let the reader check that \[\sum_i \dot{x}_i F_i = \sum_{i, j} x_i \rho_{ij} (F_j - F_i)\] (intuitively, both sides represent the rate at which the average population payoff evolves in a fixed environment).
\vspace*{4pt}\noindent\textbf{Case a).} $\sum_i \dot{x}_i F_i = \sum_{i, j} x_i p_{ij} r_{ij} (F_j - F_i)$ with $\mathrm{sgn}(r_{ij}) = \mathrm{sgn}([F_j - F_i]_+)$, so that $\mathrm{sgn}(r_{ij} (F_j - F_i)) = \mathrm{sgn}([F_j - F_i]_+)$.
It follows that the sum is zero if $F_i=F_j$ for all strategies $i$, $j$ such that $x_i>0$ and $x_j>0$ (that is, at a population equilibrium) and positive otherwise.
\vspace*{4pt}\noindent\textbf{Case b).} Recall that $p_{ij} = \lambda_j x_j$, with $\lambda_j >0$; let $\bar f= \sum_k x_k f(F_k)$ and let $y_i = x_i f(F_i) / \bar{f}$.
Note that $\sum_i y_i=1$.
We have:
\begin{equation*}
\begin{split}
\sum_i \dot{x}_i F_i = \sum_{i, j} x_i f(F_i) \lambda_j x_j r_j (F_j - F_i) & = \bar{f} \sum_{i, j} y_i \lambda_j x_j r_j (F_j - F_i)\\
& = \bar{f} \sum_j \lambda_j x_j r_j \Big(F_j- \sum_i y_i F_i\Big).
\end{split}
\end{equation*}
Since $f$ is nonincreasing, the $y_i$ (which may be thought of as distorted frequencies) give more weight to strategies with low payoffs than the true frequencies $x_i$, and it may be shown that $\sum_i y_i F_i \leq \sum_i x_i F_i = \bar{F}$.
Since $\mathrm{sgn}(r_j) = \mathrm{sgn}([F_j - \bar{F}]_+)$, it follows that we also have $\mathrm{sgn}\big(r_j (F_j- \sum_i y_i F_i)\big)= \mathrm{sgn}([F_j - \bar{F}]_+)$.
Thus, the whole sum is zero at population equilibria and positive otherwise.
\vspace*{4pt}\noindent\textbf{Case c).} Similarly, let $\bar{g} = \sum_k x_k g(F_k)$ and $y_i = x_i g(F_i) / \bar{g}$.
We get:
\begin{equation*}
\begin{split}
\sum_i \dot{x}_i F_i = \sum_{i, j} x_i \lambda_i x_j r_i g(F_j) (F_j - F_i) & = \bar{g} \sum_{i, j} x_i \lambda_i r_i y_j (F_j - F_i)\\
& = \bar{g} \sum_{i} x_i \lambda_i r_i \left(\left[\sum_j y_j F_j\right] - F_i\right).
\end{split}
\end{equation*}
Since $g$ is nondecreasing, $\sum_j y_j F_j \geq \bar{F}$.
Moreover, $r_i$ has the sign of $[\bar{F} - F_i]_+$.
Therefore, $r_i ([\sum_j y_j F_j] - F_i)$ has the sign of $[\bar{F} - F_i]_+$.
It follows that the whole sum is zero at population equilibria and positive otherwise.
\end{proof}
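Case a) of the proof can also be checked numerically: with pairwise-comparison rates, every term of the double sum is nonnegative. A minimal sketch (ours), with arbitrary positive selection rates:

```python
import random

# Sanity check of condition (PC') in case a) (pairwise comparison):
# each term x_i x_j lam_ij [F_j - F_i]_+ (F_j - F_i) is nonnegative, and some
# term is positive as soon as two present strategies earn different payoffs.

def avg_payoff_growth(x, F, lam, r):
    """Sum_i xdot_i F_i = Sum_{i,j} x_i x_j lam[i][j] r[i][j] (F[j] - F[i])."""
    n = len(x)
    return sum(x[i] * x[j] * lam[i][j] * r[i][j] * (F[j] - F[i])
               for i in range(n) for j in range(n) if i != j)

random.seed(0)
n = 3
x = [0.2, 0.5, 0.3]                     # interior state
F = [1.0, 0.4, 0.7]                     # not a population equilibrium
lam = [[random.uniform(0.5, 2.0) for _ in range(n)] for _ in range(n)]
r = [[max(F[j] - F[i], 0.0) for j in range(n)] for i in range(n)]  # case a)
assert avg_payoff_growth(x, F, lam, r) > 0
# at a population equilibrium (equal payoffs) the rates vanish, and so does the sum
Feq = [0.5, 0.5, 0.5]
req = [[max(Feq[jj] - Feq[ii], 0.0) for jj in range(n)] for ii in range(n)]
assert avg_payoff_growth(x, Feq, lam, req) == 0.0
```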
\subsection{Survival result}
Our results on survival of dominated strategies also hold for revision protocols that are not of the two-step form \eqref{eq:gen2step}.
To emphasize this fact, we first state a theorem with assumptions directly on the vector field $V^F$ and the switching rates $\rho_{ij}$.
We then provide sufficient conditions for these assumptions to be satisfied by two-step revision protocols of form \eqref{eq:gen2step}.
We begin with a list of definitions and assumptions.
\begin{definition}
Strategies $i$ and $j$ are twins if for all $x$ in $X$, $F_i(x)=F_j(x)$.
\end{definition}
\begin{definition}
At a given population state of a given game: strategy $i$ imitates other strategies if there exists $j \neq i$ such that $\rho_{ij} >0$; it is imitated by other strategies if there exists $j \neq i$ such that $\rho_{ji} >0$.
\end{definition}
On top of condition \eqref{eq:PC}, we will need the following assumptions:
\emph{Continuity (C)}: the vector field $V^F$ is Lipschitz continuous in $x$ and continuous in $u$ (implying joint continuity);
the functions $x \mapsto \rho_{ij}(F, x)$ are continuous in $x$.
\emph{Imitation (Im)}: at any interior population state that is not a Nash equilibrium, each strategy $i$ imitates other strategies or is imitated by other strategies (or both).
We also need either Advantage to Rarity or Advantage to Frequency, as defined below:
\emph{Advantage to Rarity (AR)}: in the interior of the simplex, if strategies $i$ and $j$ are twins, then
$\frac{\dot{x}_i}{x_i} \geq \frac{\dot{x}_j}{x_j}$ whenever $x_i < x_j$. Moreover, at least one of the following additional properties holds: \\
(AR1) The inequality is strict whenever at least one of the strategies $i$ and $j$ imitates other strategies.\\
(AR2) The inequality is strict whenever at least one of the strategies $i$ and $j$ is imitated by other strategies.
\emph{Advantage to Frequency (AF)}: idem but when $x_i > x_j$ instead of $x_i < x_j$.
\begin{theorem}
\label{th:hypno}
Fix $\eta >0$.
Assume that conditions \eqref{eq:PC}, (Im), and (C) are satisfied. If (AR) is satisfied (respectively, (AF)), then there exist 4-strategy games in which pure strategy $3$ strictly dominates pure strategy $4$ but $\liminf x_4(t) > \frac{1}{2} - \eta$ (respectively, $1- \eta$) for a large, open set of initial conditions.
\footnote{By a ``large set", we mean the whole simplex (for an advantage to rarity) or the half-simplex defined by $x_4 \geq x_3$ (for an advantage to frequency), except an arbitrarily small neighborhood of its boundary and of a line segment.}
\end{theorem}
The proof is given in the next section. It is based on ideas of Hofbauer and Sandholm. We first provide sufficient conditions for the assumptions of \cref{th:hypno} to hold. Consider a two-step revision protocol $\rho_{ij}(F, x) = x_j \lambda_{ij}(F, x) r_{ij}(F, x)$.
\begin{definition}
Step 2 treats twins identically if for any twin strategies $i$ and $j$, $r_{ij}= r_{ji}$ and for any $k \notin \{i, j\}$, $r_{ik} = r_{jk}$ and $r_{ki}= r_{kj}$.
\end{definition}
\begin{proposition} Consider dynamics generated by a two-step protocol of form \eqref{eq:gen2step} satisfying the assumptions of \cref{prop:PC}.
Then \cref{th:hypno} applies provided that both of the following conditions hold:\\
a) the functions $\lambda_{ij}$ and $r_{ij}$ are continuous, and Lipschitz continuous in $x$;\\
b) the selection rates $\lambda_{ij}$ are strictly positive, step 1 favours rare (respectively frequent) strategies, and step 2 treats twins identically.
\end{proposition}
\begin{proof}
The conditions of \cref{prop:PC} imply \eqref{eq:PC} and (Im), as would any protocol based on adoption rates $r_{ij}$ with the same sign as $[F_j - F_i]_+$, or $[F_j - \bar{F}]_+$. Assumption a) implies (C). It remains to show that b) implies (AR) (or, respectively, (AF)). Let $i$ and $j$ be twin strategies.
We let the reader check that, since step 2 treats twins identically:
\begin{equation*}
\frac{\dot{x}_i }{x_i} - \frac{\dot{x}_j }{x_j}= \sum_{k \notin \{i, j\}} x_k r_{ki} (\lambda_{ki} - \lambda_{kj}) + \sum_{k \notin \{i, j\}} x_k r_{ik} (\lambda_{jk}- \lambda_{ik}) + r_{ij} (x_j + x_i) [\lambda_{ji} - \lambda_{ij} ]
\end{equation*}
Moreover, again because step 2 treats twins identically, the assumption in (AR1) that at least one of the strategies $i$ and $j$ imitates (or, in (AR2), is imitated by) other strategies boils down to the fact that this holds for strategy $i$.
Now assume that $x_i < x_j$ and that step 1 favours rare strategies. Then all three terms in the RHS are nonnegative. There are two cases.
\vspace*{4pt}\noindent\textbf{Case 1.} Rare strategies are selected strictly more often at step 1. Then $\lambda_{ki} > \lambda_{kj}$ for all $k \notin \{ i, j\}$, and $\lambda_{ji} > \lambda_{ij}$.
Provided that strategy $i$ is imitated by other strategies, it follows that the first or the third term, hence the whole RHS, is positive. Therefore (AR2) holds, hence (AR) holds.
\vspace*{4pt}\noindent\textbf{Case 2.} Otherwise, rare strategies select other strategies strictly less often. The second or the third term in the RHS is then positive, provided that strategy $i$ imitates other strategies. Therefore (AR1) holds, hence (AR) holds as well.
Similarly, if step 1 favours frequent strategies, condition (AF) is satisfied. This concludes the proof.
\end{proof}
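The per-capita growth-rate decomposition used in this proof can be sanity-checked numerically for arbitrary positive rates, as long as step 2 treats the twins identically; the sketch below (ours, not part of the original argument) verifies the identity at a random parameter draw:

```python
import random

# Numerical check of the decomposition of xdot_i/x_i - xdot_j/x_j for twin
# strategies i and j, with random positive selection rates lam and adoption
# rates r, where step 2 treats the twins identically.
random.seed(1)
n, i, j = 4, 0, 1
x = [0.1, 0.3, 0.4, 0.2]
lam = [[random.uniform(0.5, 2.0) for _ in range(n)] for _ in range(n)]
r = [[random.uniform(0.5, 2.0) for _ in range(n)] for _ in range(n)]
r[j][i] = r[i][j]                       # step 2 treats twins identically
for k in range(n):
    if k not in (i, j):
        r[k][j] = r[k][i]
        r[j][k] = r[i][k]

def growth(a):
    """xdot_a / x_a for rho_{ka} = x_a * lam_{ka} * r_{ka} (two-step form)."""
    return sum(x[k] * (lam[k][a] * r[k][a] - lam[a][k] * r[a][k])
               for k in range(n) if k != a)

lhs = growth(i) - growth(j)
rhs = (sum(x[k] * r[k][i] * (lam[k][i] - lam[k][j]) for k in range(n) if k not in (i, j))
       + sum(x[k] * r[i][k] * (lam[j][k] - lam[i][k]) for k in range(n) if k not in (i, j))
       + r[i][j] * (x[i] + x[j]) * (lam[j][i] - lam[i][j]))
assert abs(lhs - rhs) < 1e-10
```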
\section{Proof of \cref{th:hypno}}
\label{sec:proof}
The proof combines ideas of the proofs of Hofbauer and Sandholm's (2011) Theorems 1 and 2. As in their Theorem 2, the game used is the hypnodisk game with a feeble twin. As in their Theorem 1, in the case of an advantage to rarity, the shares of strategies that always earn the same payoff tend to become equal.
\subsection{The game} We first briefly recall the construction of the hypnodisk game with a feeble twin (see also Figures 5, 6, 7 in Hofbauer and Sandholm). The construction has three steps. Below, $X$ may denote the simplex of a game with three or four strategies, depending on the context.
\vspace*{4pt}\noindent\textbf{Step 1. The hypnodisk game.} The hypnodisk game is a 3-strategy game, with nonlinear payoffs: it is not the mixed extension of a finite game. It may be seen as a generalization of Rock-Paper-Scissors, in that it generates cyclic dynamics for any dynamics satisfying Positive Correlation. Its payoff function will be denoted by $H$. We refer to Hofbauer and Sandholm for a precise definition and analysis of this game. The important properties are the following:
a) there is a unique Nash equilibrium $p = (1/3, 1/3, 1/3)$.
b) there exist two real numbers $r$ and $R$ with $0 < r < R < 1/\sqrt{6}$ such that: within the disk of center $p$ and radius $r$, the payoffs are as in a coordination game: $H_i(x) = x_i$; outside of the disk of center $p$ and radius $R$, the payoffs are as in an anti-coordination game: $H_i(x) = -x_i$. These disks will be denoted by $D_r = \{x \in X, || x - p||_2 < r\}$ and $D_R = \{x \in X, || x - p||_2 \leq R\}$.\footnote{We define $D_r$ as an open disk so that the annular region $D_R \backslash D_r$ is closed.}
c) In the annular region with radii $r$ and $R$, the payoffs are defined in a way that preserves the regularity of the payoff function.
d) The radii $r$ and $R$ may be chosen arbitrarily small if useful.
The payoff function $F$ is a map from $X \subset \mathbb{R}^3$ to $\mathbb{R}^3$ and may be seen as a vector field. Property b) implies that the projection of this payoff vector field on the affine span of the simplex points towards the equilibrium outside of the larger disk $D_R$, and away from the equilibrium within the smaller disk $D_r$ (except precisely at the equilibrium).\footnote{The idea to preserve the regularity of the payoff function, i.e., property c), is to rotate continuously (the projection of) the payoff vector field so that it rotates by 180 degrees in total in the annular region, see Hofbauer and Sandholm.} Moreover, the geometric interpretation of condition \eqref{eq:PC} is that, except at population equilibria, the payoff vector field, or equivalently, its projection on the affine span of the simplex, makes an acute angle with the dynamics' vector field $V^F$. It follows that in the hypnodisk game, for any dynamics satisfying \eqref{eq:PC} and any interior initial condition different from the Nash equilibrium, the solution eventually enters the annulus region with radii $r$ and $R$ and never leaves (Hofbauer and Sandholm, Lemma 3).
A similar construction could be made but putting the unique equilibrium at any desired place in the interior of the simplex instead of the barycenter.\footnote{\label{ft20} The disks $D_r$ and $D_R$ would then surround the equilibrium and the projected payoff vector field would point towards the equilibrium outside of the larger disk $D_R$, and away from it inside of the smaller disk $D_r$. This is the case for instance if $H_i(x) = p_i - x_i$ outside $D_R$ and $H_i(x)=x_i - p_i$ inside $D_r$, where $p$ is the equilibrium.}
\vspace*{4pt}\noindent\textbf{Step 2. Adding a twin.} Let us now add a fourth strategy that is a twin of the third. This leads to a 4-strategy game, which is called the hypnodisk game with a twin. Its payoff function $F$ satisfies: for any $x$ in $X$, $F_i(x) = H_i(x_1, x_2, x_3 + x_4)$ for $i=1,2,3$ and $F_4(x)= F_3(x)$. There is now a segment of Nash equilibria:
$$\mathrm{NE}=\{x \in X, (x_1, x_2, x_3 + x_4) = (1/3, 1/3, 1/3)\}.$$
The disks $D_r$ and $D_R$ become intersections of cylinders and of the simplex, which are denoted by $I$ and $O$ (for Inner and Outer cylinders):
$$I = \{x \in X, (x_1, x_2, x_3 + x_4) \in D_r\}; \quad O = \{x \in X, (x_1, x_2, x_3 + x_4) \in D_R\}.$$
The annular area with radii $r$ and $R$ becomes the intercylinder region
\[D= O\backslash I = \{x \in X, r^2 \leq (x_1- 1/3)^2 + (x_2- 1/3)^2 + (x_3 + x_4 - 1/3)^2 \leq R^2\}.\]
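For concreteness, membership in the intercylinder region $D$ can be tested as follows (a sketch under the assumption that the underlying equilibrium is the barycenter):

```python
import math

# Membership test for the intercylinder region D (our sketch), with the
# barycentric equilibrium p = (1/3, 1/3, 1/3) of the underlying hypnodisk game.
P = (1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0)

def in_intercylinder(x, r, R):
    """True iff the 4-strategy state x lies in D = O minus the interior of I,
    i.e. the projected state (x1, x2, x3 + x4) is at distance in [r, R] from p."""
    y = (x[0], x[1], x[2] + x[3])
    d = math.dist(y, P)
    return r <= d <= R

# a state on the segment NE projects onto p itself, hence lies inside I
assert not in_intercylinder((1/3, 1/3, 1/6, 1/6), 0.1, 0.2)
# deviation (0.12, -0.06, -0.06) has norm sqrt(0.0216) ~ 0.147, inside [0.1, 0.2]
assert in_intercylinder((1/3 + 0.12, 1/3 - 0.06, 1/6 - 0.03, 1/6 - 0.03), 0.1, 0.2)
```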
For any dynamics satisfying (C) and \eqref{eq:PC} and any interior initial condition not in $\mathrm{NE}$, the solution eventually enters this intercylinder zone, and then never leaves it (Hofbauer and Sandholm, Lemma 4):
$\exists T, \forall t \geq T, x(t) \in D$.
\vspace*{4pt}\noindent\textbf{Step 3. The feeble twin.} We now subtract $\varepsilon>0$ from the payoffs of strategy $4$, so that it is now dominated by strategy $3$. This leads to the hypnodisk game with a feeble twin, which we denote by $\Gamma_{\varepsilon}$.
\subsection{Sketch of proof of \cref{th:hypno}} Before providing a formal proof, we describe its logic. Consider first the hypnodisk game with an exact twin $\Gamma_0$. In the case of an advantage to rare strategies, the shares of strategy $3$ and $4$ tend to become equal. As a result, for any interior initial condition, solutions converge to an attractor $A$ which is contained in the intersection of the intercylinder region $D$ and the plane $x_3=x_4$. In this attractor, $\liminf x_4 \geq \frac{1}{6}- \frac{R}{\sqrt{6}}$. Because the vector field of the dynamics is jointly continuous in $(F, x)$, for $\varepsilon> 0$ small enough, there is an attractor $A^{\varepsilon}$ included in an arbitrarily small neighborhood of $A$, and whose basin of attraction is at least the old basin of attraction minus an arbitrarily small neighborhood of the union of the segment of $\mathrm{NE}$ and of the boundary of the simplex. It follows that for most initial conditions, $\liminf x_4 \geq \frac{1}{6}- \frac{R}{\sqrt{6}} - \delta(\varepsilon)$, with $\delta(\varepsilon) \to 0$ as $\varepsilon \to 0$.
Thus, if we fix any $\eta >0$, for $R$ and $\varepsilon$ small enough, $\liminf x_4 \geq \frac{1}{6}- \eta$. We can get an ever larger value of $\liminf x_4$ with the same construction and proof, just replacing the standard hypnodisk game by a variant with unique equilibrium $(\beta, \beta, 1- 2\beta)$, see footnote \ref{ft20}.
We then get for $\beta$, $R$ and $\varepsilon$ small enough, $\liminf x_4 \geq \frac{1}{2}- \eta$.\footnote{We thank Vianney Perchet for pointing this out to us.}
The case of an advantage to frequent strategies is similar, with some twists. Now in $\Gamma_0$, for any interior initial condition with $x_4 > x_3$, the solution converges to an attractor $A'$ included in the intersection of the intercylinder region $D$ and of the plane $x_3=0$. In $\Gamma_{\varepsilon}$, for $\varepsilon$ small enough, there is an attractor included in an arbitrarily small neighborhood of $A'$, and whose basin of attraction is at least the basin of attraction of $A'$ minus a zone with an arbitrarily small Lebesgue measure. This allows to show that, for any $\eta>0$, we may find a game such that for many initial conditions (including all initial conditions such that $x_4 > x_3 + \eta$ and $x$ is not in the $\eta$-neighborhood of the union of the segment of Nash equilibrium and of the boundary of the simplex), for $\varepsilon$ and $R$ small enough, $\liminf x_4 \geq 1/3 - \eta$.
By changing the equilibrium of the initial hypnodisk game, we get $\liminf x_4 \geq 1 - \eta$.
\subsection{Formal proof of \cref{th:hypno}}
We now provide a formal proof. To fix ideas, let us assume that (AR) holds, and that the advantage to rarity is strict when at least one of the twin strategies imitates other strategies (condition (AR1)). Other cases are similar. Consider game $\Gamma_0$ and fix an interior initial condition $x(0) \notin \mathrm{NE}$.
As in Hofbauer and Sandholm, Lemma 4, we first obtain:
\begin{claim}
\label{cl:IC}
There exists a time $T$ such that for all $t \geq T$, $x(t)$ is in the intercylinder region $D$.
\end{claim}
\begin{proof} Since Hofbauer and Sandholm do not provide a formal proof, we do it here. Due to condition (PC$'$), the vector field $V^F(x)$ points inwards at the boundary of region $D$; it follows that once solutions enter region $D$, they cannot leave it. By contradiction, assume that this is never the case, that is, the solution remains in the compact set $K = X\backslash \mathrm{int}(D)$, where $\mathrm{int}(D)$ denotes the relative interior of $D$. It follows that the solution has accumulation points in $K$, which cannot be on $\mathrm{NE} \cup \mathrm{Bd}(X)$. Moreover, the Euclidean distance $W(x)$ to the segment of Nash equilibria evolves monotonically (it increases within the inner cylinder $I$ and decreases outside the outer cylinder $O$). By a standard result on Lyapunov functions, all such accumulation points $x^{\ast}$ satisfy $\nabla W(x^*) \cdot V^F(x^*)=0$ (thus, if at time $t$, $x(t) = x^*$, then $\mathrm{d}W(x(t))/\mathrm{d}t=0$). But by construction, there are no such points in $K \backslash (\mathrm{NE} \cup \mathrm{Bd}(X))$, a contradiction.
\end{proof}
Moreover, as in Theorem 1 of Hofbauer and Sandholm:
\begin{claim}
\label{cl:equal}
$x_4(t)/x_3(t) \to 1$ as $t \to+\infty$.
\end{claim}
\begin{proof}
Let $V(x) = x_4/x_3$ and let $\dot V(x) = \nabla V(x) \cdot F(x)$, so that $\frac{\mathrm{d}}{\mathrm{d}t} V(x(t)) = \dot{V}(x(t))$.
Due to condition (AR), $V(x(t))$ evolves (weakly) monotonically in the direction of $1$. Thus, assuming to fix ideas that $x_4(0) < x_3(0)$, $V(x(t))$ is nondecreasing and less than $1$, hence has a limit $l$ such that $V(x(0)) \leq l \leq 1$. Assume by contradiction that $l < 1$.
Let $K_i= \{x \in X \, | \, \rho_{ik}=0, \forall k \neq i\}$ be the set of population states at which strategy $i$ does not imitate any other strategy. Let
$$K = K_3 \cap K_4 \cap D \cap \{x \in X, x_4 = l x_3\}.$$
Note that $K$ is compact (by Continuity) and contained in the interior of the simplex (since in $D$, $x_1>0$, $x_2>0$, $x_3+ x_4>0$, and $l \neq 0$). We want to show that the solution cannot stay in $K$ forever. For any population state in $K$, strategies $3$ and $4$ do not imitate other strategies. Moreover, the state is not an equilibrium. So by Imitation, strategies $3$ and $4$ are imitated. Therefore, $\dot{x}_3 + \dot{x}_4 > 0$.
By Continuity and compactness of $K$, there exist $\varepsilon >0$ and an open neighborhood $U$ of $K$ such that, whenever $x(t) \in U \cap X$, $\dot{x}_3+ \dot{x}_4 > \varepsilon$.
It follows that $x(t)$ cannot stay forever in $U$, hence must have accumulation points in $X \backslash K$.
We now prove that this is impossible. Indeed, let $x^{\ast} \in X \backslash K$ be an accumulation point of $x(t)$.
Necessarily, $x^{\ast} \in D \cap \{x \in X \, | \, x_4 = l x_3\} \subset \mathrm{int}(X)$.
Moreover, by standard results on Lyapunov functions, $\dot{V}(x^{\ast})=0$. Since $x^{\ast} \in \mathrm{int}(X)$, it follows from (AR1) that $x^{\ast} \in K_3 \cap K_4$, so that $x^{\ast} \in K$. We thus get a contradiction.
This concludes the proof.
\end{proof}
Let $K_{\alpha}$ denote the compact set $X \backslash N_\alpha(\mathrm{NE} \cup \mathrm{Bd}(X))$, where $N_{\alpha}$ refers to the open $\alpha$-neighborhood for the Euclidean norm. Let $\varepsilon \in (0, 1)$ and let $$U_{\varepsilon} = \{x \in N_{\varepsilon}(D), |x_4/x_3 - 1| < \varepsilon\}.$$
Let $\Phi_t$ denote the time $t$ map of the flow; that is, $\Phi_t(x_0)$ is the value at time $t$ of the solution such that $x(0) = x_0$.
\begin{claim}
\label{cl:flow}
There exists $T$ such that for all $t \geq T$, $\Phi_t(K_{\alpha}) \subset U_{\varepsilon}$.
\end{claim}
\begin{proof} Since solutions cannot leave $U_{\varepsilon}$ in forward time, it suffices to show that there exists $T$ such that $\Phi_T(K_{\alpha}) \subset U_{\varepsilon}$.
Assume that this is not the case.
Then we may find an increasing sequence of times $t_n \to +\infty$ and a sequence of positions $x_n \in K_{\alpha}$ such that $\Phi_{t_n}(x_n) \notin U_{\varepsilon}$.
By compactness of $K_{\alpha}$, up to considering a subsequence, we may assume that $x_n$ converges towards some $x_{\lim}$ in $K_{\alpha}$.
But by the previous claims, there exists a time $\tau$ such that $\Phi_{\tau} (x_{\lim}) \in U_{\varepsilon/2}$.
By continuity of the flow, there exists a neighborhood $\Omega$ of $x_{\lim}$ such that $\Phi_{\tau}(\Omega) \subset U_{\varepsilon}$, hence $\Phi_{t}(\Omega) \subset U_{\varepsilon}$ for all $t \geq \tau$, since solutions cannot leave $U_{\varepsilon}$ in forward time. But for $n$ large enough, $t_n \geq \tau$ and $x_n \in \Omega$, yet $\Phi_{t_n}(x_n) \notin U_{\varepsilon}$, a contradiction.\end{proof}
We now need to define $\omega$-limits, attractors and basins of attraction.
\begin{definition}[$\omega$-limit]
The \emph{$\omega$-limit} of a set $U \subset X$ is defined as $\omega(U) = \bigcap_{t > 0} \mathrm{cl}(\Phi^{[t, \infty)}(U))$,
where for $T \subset \mathbb{R}$,
we let $\Phi^T(U) = \bigcup_{t \in T} \Phi_t(U)$. If $x \in X$, we write $\omega(x)$ instead of $\omega(\{x\})$.
\end{definition}
\begin{definition}[attractor and basin of attraction]
A set $A \subset X$ is an \emph{attractor} if there is a neighborhood $U$ of $A$ such that $\omega(U) = A$. Its \emph{basin of attraction} is then defined as $B(A) = \{x : \omega(x) \subseteq A\}$.
\end{definition}
\begin{claim}
\label{cl:attractor}
Fix $\alpha>0$ small enough.
Then $A= \omega(K_{\alpha})$ is an attractor; it is included in the intersection of the intercylinder zone $D$ and the plane $x_3 = x_4$, and its basin of attraction is $B(A)= \mathrm{int}(X)\backslash \mathrm{NE}$.
\end{claim}
\begin{proof} By \cref{cl:flow}, there exists a time $t >0$ such that $\Phi_t(K_{\alpha}) \subset \mathrm{int}(K_{\alpha})$.
It follows (see Appendix A in Hofbauer and Sandholm) that $A$ is an attractor.
By letting $\varepsilon$ go to zero in \cref{cl:flow}, we obtain that
$$A \subset \bigcap_{\varepsilon >0} U_{\varepsilon} = D \cap \{x \in X : x_3 = x_4\}.$$
Finally, by \cref{cl:IC,cl:equal}, for all $x$ in $\mathrm{int}(X)\backslash \mathrm{NE}$, the solution starting at $x$ enters $K_{\alpha}$.
Therefore $\omega(x) \subset \omega(K_{\alpha}) =A$, hence $\mathrm{int}(X)\backslash \mathrm{NE} \subset B(A)$.
The reverse inclusion is obvious.
Note that $\omega(K_{\alpha})$ does not depend on $\alpha$ (as long as $\alpha$ is small enough).
\end{proof}
\begin{claim} Call $\Gamma_{\varepsilon}$ the hypnodisk game with an $\varepsilon$-feeble twin. Let $\eta> 0$. For all $\varepsilon>0$ small enough, in $\Gamma_{\varepsilon}$, there is an attractor $A_{\varepsilon} \subset N_{\eta}(A)$ whose basin of attraction includes $B(A) \backslash N_{\eta} (\mathrm{NE} \cup \mathrm{Bd}(X)) = X\backslash N_{\eta} (\mathrm{NE} \cup \mathrm{Bd}(X))$.
\end{claim}
\begin{proof} This follows from \cref{cl:attractor} and Continuity, as in Hofbauer and Sandholm (2011) \cite{5}.
\end{proof}
We now conclude: for $\varepsilon$ small enough, from most initial conditions, solutions converge to an attractor along which $x_4$ is bounded away from zero. The minimum of $x_4$ along this attractor may be made higher than $1/6- R/\sqrt{6} - \eta$, where $R$ is the radius of the outer cylinder, which may be chosen arbitrarily small.
By taking as base game a hypnodisk game with an equilibrium such that $x_3$ is sufficiently close to $1$ (see footnote \ref{ft20}), we may transform $1/6$ into any number strictly smaller than $1/2$, and obtain $\liminf x_4 \geq 1/2 - \delta$ for any $\delta>0$ fixed beforehand.\footnote{For an advantage to frequent strategies, we get initially $\liminf x_4 \geq 1/3- R - \eta$ and then $\liminf x_4 \geq 1- \delta$.}
\section{Discussion}
\label{sec:disc}
\emph{The hypnodisk game. }
The hypnodisk game with a feeble twin is easy to analyze, and allows us to prove survival results for large classes of dynamics. However, numerical simulations show that pure strategies strictly dominated by other pure strategies also survive in more standard games. \cref{FigC} illustrates imitation dynamics in a Rock-Paper-Scissors-Feeble-Twin game for two different domination margins (the game is the same as in the numerical explorations of Hofbauer and Sandholm, Section 5.2):
\begin{equation}
\label{eq:RPST}
\begin{array}{c}
R \\ P \\ S \\ FT
\end{array}
\left(\begin{array}{cccc}
0 & -2 & 1 & 1 \\
1 & 0 & -2 & -2 \\
-2 & 1 & 0 & 0 \\
-2 - d & 1 - d & - d & -d
\end{array}\right)
\end{equation}
The dynamics are derived from a two-step protocol of form \eqref{eq:gen2step}, with a first step as in \cref{ex:other}
(trying to meet an agent playing another strategy),
with $m= 4$, and a second step based on payoff comparison: $r_{ij} = [F_j - F_i]_+$.
\begin{figure}
\caption{\textbf{Imitation dynamics in Game \eqref{eq:RPST}}}
\label{FigC}
\end{figure}
\emph{Monotone dynamics. }
Monotone dynamics (or imitative dynamics, in the sense of Sandholm) have long been known to eliminate pure strategies strictly dominated by other pure strategies. With our vocabulary, this may be formulated as follows: in a two-step protocol of form \eqref{eq:gen2step}, if Step 1 is fair ($p_{ij} = x_j$) and Step 2 is monotonic (in the sense of Eq.~\eqref{eq:monotonicity}), then pure strategies strictly dominated by other pure strategies go extinct. Obviously, if Step 1 is fair but Step 2 is not monotonic, there is no reason to expect dominated strategies to go extinct. What we showed is that, similarly, when Step 2 is monotonic but Step 1 is not fair, dominated strategies may survive.
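As a numerical sanity check of this classical elimination result, here is a minimal Euler discretization of the replicator dynamic (fair Step 1, monotonic Step 2) in Game \eqref{eq:RPST}; the step size, horizon, and margin $d = 0.1$ are our illustrative choices, not values from the paper. Under the replicator dynamic, $\frac{d}{dt}\ln(x_3/x_4) = F_3 - F_4 = d > 0$, so the ratio $x_4/x_3$ decreases monotonically to zero:

```python
d = 0.1  # domination margin (arbitrary choice for this demo)
# Payoff matrix of the RPS-FT game, rows = R, P, S, FT.
A = [[0, -2, 1, 1],
     [1, 0, -2, -2],
     [-2, 1, 0, 0],
     [-2 - d, 1 - d, -d, -d]]

def replicator_step(x, dt):
    # One explicit Euler step of dot x_i = x_i (F_i - Fbar).
    F = [sum(A[i][j] * x[j] for j in range(4)) for i in range(4)]
    Fbar = sum(x[i] * F[i] for i in range(4))
    return [x[i] * (1 + dt * (F[i] - Fbar)) for i in range(4)]

x = [0.25, 0.25, 0.25, 0.25]
ratios = [x[3] / x[2]]            # track x_FT / x_S
dt, steps = 0.01, 10_000          # integrate up to time T = 100
for _ in range(steps):
    x = replicator_step(x, dt)
    ratios.append(x[3] / x[2])
```

Since $F_{FT} = F_S - d$ identically, the ratio shrinks by a factor close to $e^{-d\,t}$, regardless of how the underlying Rock-Paper-Scissors cycle behaves.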
\emph{Elimination results are not robust. }
For imitative dynamics, the elimination of strictly dominated pure strategies in all games relies on the fact that two strategies with the same payoff have the same per capita growth rate. This condition is an equality, and contrary to strict inequalities, equalities are not robust to small perturbations. In a sense, Hofbauer and Sandholm show that the elimination result is not robust to the introduction of the possibility to innovate. We show that it is not robust either to perturbations of the imitation protocol (here, perturbations of the first step), even if the dynamics still model pure imitation. See also Section 5.3 in Hofbauer and Sandholm.
\emph{Inflow towards a dominated strategy. }
At all times, some agents quit playing the dominated strategy for the dominating one, or for some currently even better strategy. So for the dominated strategy to survive, this outflow must be compensated: agents playing other strategies must keep imitating it.
This can occur in two ways:
\begin{enumerate}
\item
If solutions converge to a rest-point, but there is nonetheless a perpetual flow between strategies. That is, rest-points correspond to a macroscopic equilibrium between inflow and outflow, not an absence of strategy changes at the micro level (\cref{sec:simple}). This is not the case for protocols based on standard payoff comparison.
\item
If solutions do not converge to a rest-point. This requires cycling dynamics. This is why the survival examples in \cref{sec:paycomp} are more elaborate than the perhaps surprisingly simple examples of \cref{sec:simple}. Simpler examples of survival of dominated strategies under imitation dynamics based on payoff comparison may be given if we consider a population of players playing against an opponent with an exogenously cycling behavior: see \cref{app:unilateral}.
\end{enumerate}
\emph{From the replicator dynamics to the Smith dynamics. }
Consider again the protocol of \cref{ex:list} (making a list of strategies met), with a second step based on the proportional pairwise comparison rule,
$r_{ij} = [F_j - F_i]_+$. This revision protocol builds a bridge between the replicator dynamics and the Smith dynamics:
replicator dynamics are obtained for $m=1$ and the Smith dynamics (in the interior of the simplex) in the
limit $m \to +\infty$. This suggests that at least for this protocol and small values of $m$, survival of dominated
strategies will be more modest than with the Smith dynamics (lower domination level allowed, lower share of
the dominated strategy for a given domination level). This is what our preliminary numerical investigations also
suggest. A systematic investigation of these issues is left for future research.
\emph{Favouring frequent strategies. }
On the other hand, imitation protocols favouring frequent strategies allow for survival of dominated strategies at very high frequencies, much higher than with the Smith dynamics or other standard innovative dynamics.
Conceptually, an advantage to frequent strategies could be given in innovative dynamics (i.e., such that strategies initially not played may appear), by assuming a form of risk aversion of agents, who would only be willing to adopt rare or unused strategies if the payoffs of these rare strategies seem substantially higher than the payoffs of better-known strategies. For a risk-averse agent, this can be a rational attitude if information on the payoffs of other strategies is noisy, with a greater variance for rare strategies, on which less information is available.
Note also that there is a certain degree of similarity between modifying a fair imitation protocol into one that benefits frequent strategies and adding to the payoffs of the game those of a pure coordination game.\footnote{In both cases, assume we start with twin strategies in the base game (before adding the coordination component), and most of the population playing the second strategy, and then add an increasingly high bonus to the first strategy, making the second one dominated. Initially, agents keep playing the second strategy due to either the advantage to frequent strategies or the added coordination component, but when the bonus becomes large enough, they switch to the first strategy. If the bonus for the first strategy is then reduced, and even made slightly negative, agents will keep playing the first strategy \textendash\ a hysteresis effect.}
\numberwithin{lemma}{section}
\numberwithin{corollary}{section}
\numberwithin{proposition}{section}
\numberwithin{equation}{section}
\appendix
\section{Proofs of propositions on advantage to rare or frequent strategies}
\label{app:proofs}
In this section, the probability that a revising agent selects strategy $j$ at step 1 is independent of the revising agent's strategy, so we denote it by $p_j$ instead of $p_{ij}$.
\subsection{Meeting $m $ agents: Proof of \cref{prop:ex1}}
\begin{claim}
\label{cl:Bayes} It suffices to show that when $m$ is deterministic, the first step is fair ($p_i=x_i$ for all $i$) for $m=1$ or $m=2$, and advantages rare strategies for any $m \geq 3$.
\end{claim}
This is a simple computation, which is left to the reader.
\begin{claim} The first step is fair for $m=1$ or $m=2$\end{claim}
\begin{proof} This is obvious for $m=1$.
For $m=2$, this is because the selection step boils down to selecting an agent uniformly at random, merely breaking the process into two stages: first select two agents uniformly at random, then,
among these two, select one of them, again uniformly.
\end{proof}
\begin{claim} For any fixed $m \geq 3$, the first step advantages rare strategies.
\end{claim}
\begin{proof} We divide the proof in four steps.
\vspace*{4pt}\noindent\textbf{Step 1.}
Fix $m \geq 3$.
Let $0 \leq q \leq l \leq m$.
Let $E_{l, q}$ denote the event: among the $m$ agents met, $l$ play other strategies than $i$ or $j$ (so $\tilde{m}=m-l$ play $i$ or $j$) and these $l$ agents play $q$ different strategies.\footnote{Example: if $m= 5$, $i=1$, $j=4$, and the agents drawn are: one of type 1, two of type 2, two of type 3, then $l=4$ and $q=2$.} Then
\[\frac{p_i(x)}{x_i}= \sum_{ (q,l): 0 \leq q \leq l \leq m} P(E_{l,q}) \frac{P(i | E_{l, q})}{x_i}\]
\vspace*{4pt}\noindent\textbf{Step 2.} Now let $y_i=\frac{x_i}{x_i + x_j}$ and $y_j= 1-y_i$.
Condition on the event $E_{l,q}$.
If $l=m$, that is, if all $m$ agents met play strategies other than $i$ or $j$, then $P(i | E_{l,q})=0$.
Otherwise, each of the $\tilde{m}=m-l$ players playing $i$ or $j$ is of type $i$ with probability $y_i$ and the draws are independent.
So:
a) with probability $y_i^{\tilde{m}}$, all of these $\tilde{m}$ players are of type $i$; so there are exactly $q+1$ strategies encountered, including $i$ but excluding $j$.
Thus, $i$ is selected with probability $1/(q+1)$, and $j$ with probability $0$.
b) symmetrically, with probability $y_j^{\tilde{m}}$, all of the $\tilde{m}$ players are of type $j$, hence $i$ is selected with probability $0$ and $j$ with probability $1/(q+1)$
c) finally, with the remaining probability $1- y_i^{\tilde{m}} - y_j^{\tilde{m}}$, there are both players of type $i$ and players of type $j$ among these $\tilde{m}$ players, and each of the strategies $i$ and $j$ is selected with probability $1/(q+2)$.
Summing up, if $l < m$, then
\begin{equation}
\label{eq:Elq}
P(i | E_{l, q})= \frac{1}{q+1} y_i^{\tilde{m}} + \frac{1}{q+2} \left(1- y_i^{\tilde{m}} - y_j^{\tilde{m}} \right)
\end{equation}
\vspace*{4pt}\noindent\textbf{Step 3.}
Assume $m \geq 3$, $l \leq m-2$ (so $\tilde{m} \geq 2$), and $0 < x_i < x_j$.
Then \[\frac{P(i | E_{l,q})}{x_i} > \frac{P(j | E_{l, q})}{x_j}.\]
Let $A_i= (q+1)(q+2)P(i | E_{l,q}) / y_i$ and define $A_j$ similarly.
It suffices to show that $A_i > A_j$.
By \eqref{eq:Elq}:
\[y_i A_i= (q+2)y_i^{\tilde{m}} + (q+1) (1- y_i^{\tilde{m}} - y_j^{\tilde{m}} )= y_i^{\tilde{m}} + (q+1) (1- y_j^{\tilde{m}})\]
Noting that $\displaystyle 1- y_j^{\tilde{m}}= (1-y_j) \sum_{r=0}^{\tilde{m}-1} y_j^r = y_i \sum_{r=0}^{\tilde{m}-1} y_j^r $ and dividing by $y_i$ we obtain:
\[A_i= y_i^{\tilde{m}-1} + (q+1) \sum_{r=0}^{\tilde{m}-1} y_j^r= y_i^{\tilde{m}-1} + (q+1) y_j^{\tilde{m}- 1} + (q+1)\sum_{r=0}^{\tilde{m}-2} y_j^r, \]
and similarly for $A_j$.
It follows that $A_i - A_j = T_1 + T_2$ with \[T_1= q (y_j^{\tilde{m}-1} - y_i^{\tilde{m}- 1}) \mbox{ and } T_2 = (q+1)\sum_{r=0}^{\tilde{m}-2} (y_j^r - y_i^r).\] The term $T_1$ is always nonnegative and it is positive if $q \geq 1$, that is if $l \geq 1$.
This is the case in particular if $l=m-2$ since $m \geq 3$.
The term $T_2$ is always nonnegative, and it is positive if $\tilde{m} \geq 3$, that is if $l \leq m-3$.
Since we assumed $l \leq m-2$, at least one of the terms $T_1$ and $T_2$ is positive.
Therefore, $T_1+ T_2 > 0$ and $A_i > A_j$.
\vspace*{4pt}\noindent\textbf{Step 4.}
Assume $m \geq 3$ and $0 < x_i < x_j$.
Then $p_i/x_i > p_j/x_j$.
Indeed, it is easily seen that if $l=m$ or $l=m-1$, then $P(i | E_{l, q})/x_i= P(j | E_{l, q})/x_j$ (equal to $0$ if $l=m$, and to $1/ [(x_i + x_j)(q+1)]$ if $l=m-1$).
Moreover, we just saw that if $l \leq m-2$, which happens with positive probability, then $P(i | E_{l, q})/x_i > P(j | E_{l, q})/x_j$.
Since
\[\frac{p_i}{x_i} = \sum_{0 \leq q \leq l \leq m} P(E_{l, q}) \frac{P(i | E_{l, q})}{x_i},
\]
the result follows.
\end{proof}
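The computations above can be checked by exact enumeration over all ordered samples. The sketch below is ours, not the paper's code; it implements the selection rule implicit in Step 2 (the reviser picks uniformly among the distinct strategies present in the sample of $m$ agents):

```python
from itertools import product

def selection_probs(x, m):
    """Exact first-step selection probabilities when a reviser meets m
    agents drawn i.i.d. from population state x, then picks uniformly
    among the distinct strategies observed in the sample."""
    n = len(x)
    p = [0.0] * n
    for sample in product(range(n), repeat=m):   # all n**m ordered samples
        prob = 1.0
        for s in sample:
            prob *= x[s]
        distinct = set(sample)
        for s in distinct:
            p[s] += prob / len(distinct)
    return p

x = [0.1, 0.3, 0.6]
p2 = selection_probs(x, 2)   # fair: p_i = x_i
p3 = selection_probs(x, 3)   # advantage to rarity: p_i/x_i decreasing in x_i
```

For $m=2$ the computed probabilities coincide with $x$, while for $m=3$ the ratios $p_i/x_i$ are strictly ordered against frequency, as \cref{prop:ex1} asserts.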
\subsection{The majoritarian choice: Proof of \cref{prop:ex2}}
It suffices to show that if $m$ is deterministic, then step 1 is fair for $m=1$ or $m=2$, and advantages frequent strategies for any $m \geq 3$.
The proof that step 1 is fair for $m=1$ or $m=2$ is as in \cref{prop:ex1}.
We now prove that for $m \geq 3$, the first step favours frequent strategies.
Assume $x_i > x_j >0$ and let $y_i= x_i/(x_i + x_j)$ and $y_j =1-y_i$.
Consider a revising agent meeting $m\geq 3$ other agents.
\vspace*{4pt}\noindent\textbf{Case 1.} Condition on the event that only agents playing strategies $i$ and $j$ are met
(in a slight abuse of notation, we keep writing $p_i$ for the probability that $i$ is selected, without making explicit in the notation that this is conditional on that event).
\vspace*{4pt}\noindent\textbf{Subcase 1.1
($m$ odd, $m \geq 3$).}
If $m= 2m'+1$, strategy $i$ is selected if and only if it is played by at least $m'+1$ of the $m$ agents met, so that:
\[\frac{p_i}{y_i} = \frac{1}{y_i} \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i^k y_j^{m-k}= \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i^{k-1} y_j^{m-k}\]
Similarly,
\[\frac{p_j}{y_j} = \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_j^{k-1} y_i^{m-k}\]
Since for any $k \geq m'+1$, we have $k-1 \geq m' \geq m - (m'+1) \geq m-k$, it follows that the first expression is term by term greater than the second one, and strictly greater for all terms with $k > m'+1$.
Such terms exist because $m= 2m'+1 \geq 3$ implies $m > m'+1$.
It follows that $p_i/y_i > p_j/y_j$.
\vspace*{4pt}\noindent\textbf{Subcase 1.2 ($m$ even, $m\geq 4$).} If $m=2m'$, then there may be a tie, if both strategies are met $m'$ times, in which case each is selected with probability $1/2$.
Thus we get:
\begin{equation}
\label{eq:app1prot2} \frac{p_i}{y_i} = \frac{1}{2} \left(\begin{array}{c} m \\ m' \end{array}\right) y_i^{m'-1} y_j^{m'}
+ \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i^{k-1} y_j^{m-k}
\end{equation}
Note that if $k \geq m'+1$, then
\[y_i^{k-1}y_j^{m-k} \geq y_i^{m'} y_j^{m- (m'+1)}= y_i^{m'} y_j^{m'-1}.\]
Moreover, the inequality is strict for any $k \geq m'+2$, in particular for $k=m$, since we assumed $m=2m' \geq 4$.
Thus, factorizing by $y_i^{m'-1}y_j^{m'-1}$, we obtain:
\[\frac{p_i}{y_i} > y_i^{m'-1}y_j^{m'-1} \left[ \frac{1}{2} \left(\begin{array}{c} m \\ m' \end{array}\right) y_j
+ \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i \right] \]
A similar (but reverse) inequality holds for $p_j/y_j$.
Using both inequalities, we obtain:
\[\frac{p_i}{y_i} - \frac{p_j}{y_j} > y_i^{m'-1}y_j^{m'-1} (y_i - y_j) \left[\sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) - \frac{1}{2} \left(\begin{array}{c} m \\ m' \end{array}\right) \right]\]
We let the reader check that the first term in the summation suffices to show that the bracket is nonnegative, so that $p_i/y_i > p_j/y_j$.
\vspace*{4pt}\noindent\textbf{Case 2.} Now consider the general case.
Out of the $m$ players met, let $m_k$ denote the number of players playing strategy $k$.
Let $E(l, b, q)$ denote the event: out of the $m$ players met, $l= \sum_{k \notin \{i, j\}} m_k$ play strategies different from $i$ and $j$, $b= \max_{k \notin \{i,j\}} m_k$ is the highest number of occurrences of a strategy different from $i$ and $j$, and there are $q$ strategies $k \notin \{i, j\}$ such that $m_k=b$. Condition on this event.
Again, we write $p_i$ instead of $P(i |E(l, b, q))$.
We dealt with the case $l=0$ in Case 1, so we may assume $l \geq 1$ hence $b \geq 1$ and $q \geq 1$.
Let $\tilde{m} = m - l$ be the number of agents met playing $i$ or $j$.
\noindent\textbf{Subcase 2.1. $b > \tilde{m}$.} Then $i$ and $j$ cannot be selected, hence $p_i=p_j=0$.
\noindent\textbf{Subcase 2.2. $\tilde{m} \geq 2b+1$.} Then one of the strategies $i$ and $j$ will win for sure.
Moreover, $\tilde{m} \geq 3$, and the proof is as in Case 1, replacing $m$ with $\tilde{m}$.
\vspace*{4pt}\noindent\textbf{{Subcase 2.3. $\tilde{m}= 2b$.}} This is similar to Subcase 1.2, replacing $m$ with $\tilde{m}$, with the twist that if $m_i=m_j=b$, strategies $i$ and $j$ are each selected with probability $1/(q+2)$ rather than $1/2$. The factor $1/2$ in Eq.~\eqref{eq:app1prot2} thus becomes $1/(q+2)$. Since $q \geq 1$, it is then easy to check that $p_i/y_i > p_j/y_j$ even if $\tilde{m}=2$ (while we had to require $m \geq 4$ in Subcase 1.2).
\vspace*{4pt}\noindent\textbf{{Subcase 2.4. $b \leq \tilde{m} \leq 2b - 1$.}} This case is similar to Subcase 1.1.
We get:
\[\frac{p_i}{y_i} = \frac{1}{q+1} \left(\begin{array}{c} \tilde{m} \\ b \end{array}\right) y_i^{b-1} y_j^{\tilde{m} - b}
+ \sum_{k= b+1}^{\tilde{m}} \left(\begin{array}{c} \tilde{m} \\ k \end{array}\right) y_i^{k-1} y_j^{\tilde{m}-k}\]
and a symmetric expression for $p_j/y_j$.
Because $\tilde{m} \leq 2b - 1 \Rightarrow b-1 \geq \tilde{m} - b$,
it follows that the expression for $p_i/y_i$ is term by term greater than the expression for $p_j/y_j$, with a strict inequality for the term $k=\tilde{m}$, unless $\tilde{m}=1$.
It follows that if $\tilde{m}=1$, $p_i/y_i= p_j/y_j$, and if $\tilde{m} > 1$, then $p_i/y_i > p_j/y_j$.
\emph{To conclude:} for any $l$, $b$, $q$, $P(i | E(l,b,q))/y_i \geq P(j | E(l, b, q))/y_j$, with a strict inequality in some cases occurring with positive probability.
Since $$p_i/y_i= \sum_{l, b, q} P(E(l, b, q)) P(i | E(l,b,q))/y_i,$$
it follows that $p_i/y_i > p_j/y_j$.
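Again, the conclusion can be verified by exact enumeration. The sketch below (ours, not the paper's) implements the majoritarian first step with uniform tie-breaking among the strategies achieving the maximal count, as in Subcases 2.3 and 2.4:

```python
from itertools import product

def majority_probs(x, m):
    """Exact selection probabilities under the majoritarian first step:
    the reviser adopts the strategy most frequent among the m agents met,
    breaking ties uniformly among the strategies achieving the maximum."""
    n = len(x)
    p = [0.0] * n
    for sample in product(range(n), repeat=m):   # all n**m ordered samples
        prob = 1.0
        for s in sample:
            prob *= x[s]
        counts = [sample.count(k) for k in range(n)]
        top = max(counts)
        winners = [k for k in range(n) if counts[k] == top]
        for s in winners:
            p[s] += prob / len(winners)
    return p

x = [0.5, 0.3, 0.2]
p2 = majority_probs(x, 2)   # m = 2: fair
p3 = majority_probs(x, 3)   # m = 3: favours frequent strategies
```

For $m=2$ the rule is fair, while for $m=3$ the ratios $p_i/x_i$ are strictly increasing in frequency, as \cref{prop:ex2} asserts.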
\section{Imitation protocols not of the form \eqref{eq:gen2step}}
\label{app:moreprot}
We note here that our results would also apply to protocols that cannot be neatly separated in two steps in the sense of Eq. \eqref{eq:gen2step}. Reconsider \cref{ex:list} from \cref{sec:ImProc}, where a revising agent meets several other agents and makes a list of the strategies they play. We assumed then that he would investigate just one of these strategies. Instead, the revising agent could obtain information on the payoffs of all those strategies.
This makes sense if getting information on payoffs of strategies met is cheap.
In our concrete example, after meeting strategies $1$, $2$, $3$, the revising agent would obtain information on the payoffs $F_1$, $F_2$, $F_3$, and adopt one of these strategies with a probability that depends on all these payoffs, and possibly his own.
For instance, he could adopt strategy $j \in \{1, 2, 3\}$ with probability $f(F_j)/ (1 + \sum_{k=1, 2, 3} f(F_k))$ with $f$ positive increasing, or with probability $[F_j - F_i]_+/(1 + \sum_{k=1, 2, 3} [F_k - F_i]_+)$.
Such protocols cannot easily be put in the form \eqref{eq:gen2step}.
Nevertheless, the resulting dynamics still favour rare strategies in the sense that when two strategies have the same payoff, the rarest one has a higher per-capita growth rate; thus, as long as the switching rates $\rho_{ij}$ are regular enough in $(F, x)$, versions of our results would apply. However, our results do not apply to \emph{discontinuous} imitative variants of the best-reply dynamics, such as imitating a best-reply to the current population state among the strategies met.
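For concreteness, the second adoption rule above can be sketched as follows; the function name and the stay-put convention (the revising agent keeps his own strategy with the residual probability) are ours:

```python
def adoption_probs(F, i, met):
    """Adoption probabilities for a reviser currently playing strategy i
    who has observed the payoffs of the strategies in `met`, under the rule
    p_j = [F_j - F_i]_+ / (1 + sum_k [F_k - F_i]_+).
    With the residual probability, the reviser keeps strategy i."""
    gains = {j: max(F[j] - F[i], 0.0) for j in met}
    Z = 1.0 + sum(gains.values())
    probs = {j: g / Z for j, g in gains.items()}
    probs[i] = probs.get(i, 0.0) + 1.0 - sum(gains.values()) / Z
    return probs

# Reviser plays strategy 0 and has observed strategies 1, 2, 3
# (payoff vector F is a hypothetical example).
probs = adoption_probs([0.0, 1.0, 2.0, 0.5], i=0, met=[1, 2, 3])
```

With these illustrative payoffs, the payoff gains are $1$, $2$, $0.5$, the normalizer is $Z = 4.5$, and the reviser keeps his strategy with probability $1/Z$.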
\section{Unilateral approach: Simple examples for comparison based imitation processes}
\label{app:unilateral}
In this section, we adopt a unilateral approach, in the spirit of Viossat (2015) \cite{11}.
That is, we study the evolution of behavior in a large population of players (the focal population, player 1) facing an unknown opponent (the environment, player 2), whose behavior we freely choose.
This allows us to provide simple examples of survival of dominated strategies even for dynamics based on payoff comparison.
Specifically, let us denote by $G_{\varepsilon}$ a $3 \times 2$ game where the payoffs in the focal population are as follows:
\begin{equation}
\label{eq:GameUni}
\begin{array}{cc}
& \begin{array}{cc}
L \hspace{0.2 cm} & \hspace{0.2 cm} R \\
\end{array} \\
\begin{array}{c}
1 \\
2 \\
3 \\
\end{array}
& \left(\begin{array}{cc}
1 & 0 \\
0 & 1 \\
-\varepsilon & 1- \varepsilon \\
\end{array}\right)\\
\end{array}
\end{equation}
As before, $x_i(t)$ denotes the frequency of strategy $i \in \{1, 2,3 \}$ in the focal population.
We make the following assumptions:
(A1) For $i=1, 2, 3$, when the opponent plays $Y \in \{L, R\}$, then $$\dot x_i = x_i g^Y_i(x)$$ for some growth-rate function $g_i^Y : X \to \mathbb{R}$ that is Lipschitz continuous in $x$
and depends continuously on the parameter $\varepsilon$ (here, $X$ denotes the simplex of possible population states for the focal population).
(A2) When $\varepsilon=0$, if $x_1 \notin \{0, 1\}$,
then $g_1^L(x) >0$ and $g_1^R(x) < 0$.\\
We also assume that at least one of the conditions (A3), (A3') below holds:
(A3) When $\varepsilon=0$, if $x_3 < x_2$, then $g_3^L (x) \geq g_2^L(x)$ and $g_3^R (x) > g_2^R(x)$\\
or
(A3') When $\varepsilon=0$, if $x_3 < x_2$, then $g_3^L (x) > g_2^L(x)$ and $g_3^R (x) \geq g_2^R(x)$\\
Assumption (A1) is a regularity assumption.
Assumption (A2) is weaker than Positive Correlation.
Assumption (A3) or (A3')
is a form of advantage to rare strategies.
These assumptions are satisfied, for instance, by any dynamics arising from a revision
protocol of form \eqref{eq:gen2step} with $\lambda_{ij}$, $r_{ij}$ Lipschitz continuous in $x$ and continuous in $F$,
$r_{ij}$ with the sign of $[F_j - F_i]_+$,
and favouring rare strategies in the sense of \cref{def:adv}.
\begin{proposition}
\label{prop:app3}
Fix $\eta > 0$.
Let $\delta$, $x_{\min}$, $x_{\max}$ be real numbers such that $0 < \delta < x_{\min} < x_{\max} < 1- \delta$.
Let $K_{\delta}= \{x \in X | \min(x_1, 1-x_1) \geq \delta\}$.
Assume that the opponent plays $L$ until the first time $\tau$
such that $x_1(\tau) \geq x_{\max}$, then plays $R$ for $t > \tau$ until $x_1 = x_{\min}$, then plays $L$ again until $x_1= x_{\max}$, etc.\footnote{The fact that the opponent plays a
discontinuous strategy simplifies the exposition but could be replaced by a similar behavior with smooth transitions. Due to this discontinuity, the frequencies $x_i(t)$ are only piecewise $C^1$,
but it may be shown that this creates no technical difficulty.}
Then there exists $\bar{\varepsilon}>0$ such that for any $\varepsilon \in [0, \bar{\varepsilon}]$ and any initial condition
$x(0) \in K_{\delta} \cap \mathrm{int}(X)$, $\liminf x_3(t) > (1- x_{\max}) \left(\frac{1}{2} - \eta \right)$.
\end{proposition}
\begin{proof}
The intuition is that when $\varepsilon=0$, the shares of strategies 2 and 3 tend to become equal. Thus, $\liminf x_3(t) = (1-\limsup x_1)/2= (1-x_{\max})/2$.
We then need to show that for a sufficiently small perturbation of payoffs, $\liminf x_3$ remains close to $(1-x_{\max})/2$. By contrast with \cref{th:hypno},
we do not deal with an autonomous system of differential equations, but with a controlled system. This is why the proof below does not rely on continuity of attractors but on a direct analysis.
To fix ideas, assume that $(A3)$ holds. The proof when $(A3')$ holds is similar. Throughout, we assume that $x(0) \in K_{\delta} \cap \mathrm{int}(X)$.
By (A1), (A2) and compactness of $K_{\delta}$, there exist positive real numbers $\bar{\varepsilon}$, $\alpha_1$, $\alpha_2$
such that, for any $\varepsilon$ in $[0, \bar{\varepsilon}]$ and any $x \in K_{\delta}$, $\alpha_1 \leq \dot{x}_1 \leq \alpha_2$ when the
opponent plays $L$ and $- \alpha_2 \leq \dot{x}_1 \leq - \alpha_1$ when she plays $R$.
It follows that $x(t)$ eventually
enters the compact set $$K = \{x \in X, x_{\min} \leq x_1 \leq x_{\max}\},$$ and never leaves, oscillating between $x_{\min}$ and $x_{\max}$.
Moreover, the time to travel from the hyperplane $x_1 = x_{\min}$ to the hyperplane $x_1= x_{\max}$ (or back) is always
between $$T_{\min}= \frac{x_{\max}- x_{\min}}{\alpha_2} \text{ and } T_{\max}= \frac{x_{\max} - x_{\min}}{\alpha_1}.$$
Note that $\liminf(x_2 + x_3) = 1 - x_{\max}$.
Thus it suffices to show that, possibly up to lowering $\bar{\varepsilon}$,
$$\liminf \frac{x_3}{x_2 + x_3} \geq \frac{1}{2} - \eta.$$ We first show that $\limsup \frac{x_3}{x_2 + x_3} \geq \frac{1-\eta}{2}$.
Assume by contradiction that this is not the case.
Then from some time $T$ on, $$x(t) \in \tilde{K} = \left\{ x \in K, \frac{x_3}{x_2 + x_3} \leq \frac{1-\eta}{2} \right\}.$$
By (A1), (A3) and compactness of $\tilde{K}$, and up to lowering $\bar{\varepsilon}$, we may assume that there exist positive real
numbers $\beta_1$ and $\beta_2(\varepsilon)$ such that for any $x \in \tilde{K}$ and any $\varepsilon \in [0, \bar{\varepsilon}]$,
\begin{equation}
\label{eq:compgrowth}
g_3^R(x) - g_2^R(x) \geq \beta_1 \text{ and } g_3^L(x) - g_2^L(x) \geq - \beta_2(\varepsilon)
\end{equation}
with $\beta_1$ independent of $\varepsilon$ and $\beta_2(\varepsilon) \to 0$ as $ \varepsilon \to 0$.
Up to lowering $\bar{\varepsilon}$ again, we may
assume that $$C:= \beta_1 T_{\min} - \beta_2(\varepsilon) T_{\max} >0.$$ Now let $t_{2k}$ and $t_{2k+1}$ be the $k^{th}$ time greater
than $T$ such that $x_1 = x_{\min}$ and $x_1 = x_{\max}$, respectively.
Note that
$\frac{d}{dt} \ln(x_3/x_2) = g_3^Y (x) - g_2^Y(x)$ when the opponent plays $Y$.
Integrating between $t_{2k}$ and $t_{2k+2}$ and using
\eqref{eq:compgrowth} we obtain that between $t_{2k}$ and $t_{2k+2}$, $\ln(x_3/x_2)$ increases by at least $C$.
Since $C>0$,
this implies that $x_3/x_2 \to +\infty$, a contradiction.
Therefore,
$$\limsup_{t \to +\infty} \frac{x_3}{x_2 + x_3}(t) \geq \frac{1-\eta}{2}.$$
Moreover, since $\beta_2(\varepsilon) \to 0$ as $\varepsilon \to 0$, up to lowering $\bar{\varepsilon}$ again, we may assume that between $t_{2k}$ and $t_{2k+1}$,
$x_3/(x_2+ x_3)$ does not decrease by more than $\eta/2$.
It may be shown that this ensures that
$\liminf \frac{x_3}{x_2 + x_3} \geq \frac{1-\eta}{2} - \frac{\eta}{2} = \frac{1}{2} - \eta$.
This concludes the proof.
\end{proof}
Note that for $x_{\max}$ and $\eta$ small enough, $\liminf x_3$ may be made arbitrarily close to $1/2$.
If we replace Assumptions (A3), (A3') by the same assumptions but when $x_3 > x_2$, thus giving an advantage to frequent strategies, then we obtain that for $\varepsilon$ small enough and an open set of initial conditions, $\liminf x_3$ may be made arbitrarily close to $1$.
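The mechanism behind \cref{prop:app3} can be illustrated numerically in the exact-twin case $\varepsilon = 0$. The sketch below is our construction, not the paper's protocol: the first-step weights $p_j \propto x_j(2 - x_j)$ favour rare strategies, the second step uses $r_{ij} = [F_j - F_i]_+$, the thresholds $x_{\min} = 0.3$, $x_{\max} = 0.7$ and the Euler step size are arbitrary choices:

```python
# Mean dynamic of the two-step protocol rho_ij = p_j [F_j - F_i]_+ in
# Game (eq:GameUni) with eps = 0 (exact twin), where p_j is proportional
# to x_j (2 - x_j), an example of advantage to rare strategies.
EPS = 0.0
PAY = {'L': [1.0, 0.0, -EPS], 'R': [0.0, 1.0, 1.0 - EPS]}

def step(x, mode, dt):
    F = PAY[mode]
    w = [xi * (2.0 - xi) for xi in x]        # rare-favouring weights
    Z = sum(w)
    p = [wi / Z for wi in w]                 # first-step selection probs
    xdot = [0.0, 0.0, 0.0]
    for i in range(3):
        for j in range(3):
            flow = max(F[i] - F[j], 0.0) * x[j] * p[i]  # j-players adopting i
            xdot[i] += flow
            xdot[j] -= flow
    return [x[k] + dt * xdot[k] for k in range(3)]

# Opponent plays L until x_1 >= 0.7, then R until x_1 <= 0.3, and so on.
x, mode, dt = [0.4, 0.3, 0.3], 'L', 0.01
min_x3 = x[2]
for _ in range(20_000):                      # horizon T = 200
    x = step(x, mode, dt)
    if mode == 'L' and x[0] >= 0.7:
        mode = 'R'
    elif mode == 'R' and x[0] <= 0.3:
        mode = 'L'
    min_x3 = min(min_x3, x[2])
```

Since strategies 2 and 3 are exact twins here and start with equal shares, $x_2 = x_3$ along the whole trajectory, so $x_3 = (1 - x_1)/2$ never falls much below $(1 - x_{\max})/2 = 0.15$; the proposition asserts that this survival persists for small $\varepsilon > 0$.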
\begin{figure}
\caption{\textbf{Imitation dynamics favouring rare strategies in Game \eqref{eq:GameUni}}}
\label{FigB}
\end{figure}
\cref{FigB} depicts imitation dynamics with payoffs in the focal population described by the payoff matrix \eqref{eq:GameUni} and a periodic behavior of Player 2 that smoothly approximates playing $L$ on time-intervals of the form $[2k, 2k+1)$ and $R$ on time-intervals of the form $[2k+1, 2k+2)$, where $k$ is an integer (at time $t$, Player 2 puts probability $y(t) = \frac{1+ \sin^{1/9} (\pi t)}{2}$ on strategy L). As in \cref{FigC}, the dynamics of the focal population are derived from a two-step protocol of form \eqref{eq:gen2step}, with a first step as in \cref{ex:other}
(trying to meet an agent playing another strategy),
with $m= 4$, and a second step based on payoff comparison $r_{ij} = [F_j - F_i]_+$. \cref{FigB} illustrates that survival of dominated strategies can also occur if the behavior of the opponent is smooth and independent of the current population state in the focal population. The average frequency of the dominated strategy is around $20\%$ with a domination margin of $\varepsilon = 0.05$, and around $10\%$ with a domination margin of $\varepsilon = 0.1$.
For an advantage to frequent strategies, survival of the dominated strategy in this example seems less robust: if the behavior of the opponent oscillates in a way that is independent of the population state in the focal population, what happens in most simulations is that initially either strategy 1 or strategy 3 takes over, as deviations from an approximately equal share of these strategies get amplified by the advantage to frequent strategies. In the first case, the solution converges to the mixed strategy putting probability 1 on the first strategy. In the second case, strategy 1 gets extinct, and then, since the second step of the protocol is based on payoff comparison, strategy 2 drives strategy 3 extinct.
\section*{Acknowledgments}
\begingroup
\small
The first author is grateful for financial support by
the French National Research Agency (ANR) in the framework of
the ``Investissements d'avenir'' program (ANR-15-IDEX-02),
the LabEx PERSYVAL (ANR-11-LABX-0025-01),
MIAI@Grenoble Alpes (ANR-19-P3IA-0003),
and the bilateral ANR-NRF grant ALIAS (ANR-19-CE48-0018-01).
\endgroup
\end{document}
\begin{document}
\title[Generalized coinvariant algebras for wreath products]
{Generalized coinvariant algebras for wreath products}
\author{Kin Tung Jonathan Chan}
\address
{Department of Mathematics \newline \indent
University of California, San Diego \newline \indent
La Jolla, CA, 92093-0112, USA}
\email{[email protected], [email protected]}
\author{Brendon Rhoades}
\begin{abstract}
Let $r$ be a positive integer, let $G_n$ be the reflection group of $n \times n$ monomial matrices whose
nonzero entries are $r^{th}$ complex roots of unity, and let $k \leq n$. We define and study
two new graded
quotients $R_{n,k}$ and $S_{n,k}$ of the polynomial ring ${\mathbb {C}}[x_1, \dots, x_n]$
in $n$ variables. When $k = n$, both of these quotients coincide with the classical coinvariant
algebra attached to $G_n$.
The algebraic properties of our quotients are governed by the combinatorial properties of
$k$-dimensional faces in the Coxeter complex attached to $G_n$ (in the case of $R_{n,k}$)
and $r$-colored ordered set partitions of $\{1, 2, \dots, n\}$ with $k$ blocks
(in the case of $S_{n,k}$).
Our work generalizes a construction of Haglund, Rhoades, and Shimozono from
the symmetric group ${\mathfrak{S}}_n$ to the more general wreath products $G_n$.
\end{abstract}
\keywords{Coxeter complex, coinvariant algebra, wreath product}
\subjclass{Primary 05E18, Secondary 05E05}
\maketitle
\section{Introduction}
\label{Introduction}
The coinvariant algebra of the symmetric group ${\mathfrak{S}}_n$ is among the most important
${\mathfrak{S}}_n$-modules in combinatorics. It is a graded version of the regular representation of
${\mathfrak{S}}_n$, has structural properties deeply tied to the combinatorics of permutations,
and gives a combinatorially
accessible model for the action of ${\mathfrak{S}}_n$ on the cohomology ring $H^{\bullet}(G/B)$
of the flag manifold $G/B$.
Haglund, Rhoades, and Shimozono \cite{HRS} recently defined a generalization
of the ${\mathfrak{S}}_n$-coinvariant algebra which depends on an integer parameter $k \leq n$.
The structure of their graded ${\mathfrak{S}}_n$-module is governed by the combinatorics of
ordered set partitions of $[n] := \{1, 2, \dots, n \}$ with $k$ blocks.
The graded Frobenius image of this module is (up to a minor twist) either of the combinatorial
expressions ${\mathrm {Rise}}_{n,k}({\mathbf {x}};q,t)$ or
${\mathrm {Val}}_{n,k}({\mathbf {x}};q,t)$ appearing in the {\em Delta Conjecture} of Haglund, Remmel, and Wilson \cite{HRW}
upon setting $t = 0$. The Delta Conjecture is a generalization
of the Shuffle Conjecture in the field of
Macdonald polynomials; their module gives the first example of a `naturally constructed' module
with Frobenius image related to
the Delta Conjecture.
A linear transformation $t \in GL_n({\mathbb {C}})$ is a {\em reflection} if the fixed space of $t$ has codimension $1$
in ${\mathbb {C}}^n$ and $t$ has finite order. A finite subgroup $W \subseteq GL_n({\mathbb {C}})$ is called a
{\em reflection group} if $W$ is generated by reflections.
Given any complex reflection group $W$, there is a coinvariant algebra $R_W$ attached to $W$.
The algebra $R_W$ is a graded $W$-module with structural properties closely related to the combinatorics
of $W$. In this paper we provide a Haglund-Rhoades-Shimozono style generalization of $R_W$
in the case where $W$ belongs to the family of reflection groups $G(r,1,n) = {\mathbb {Z}}_r \wr {\mathfrak{S}}_n$.
The general linear group $GL_n({\mathbb {C}})$ acts on the polynomial ring
${\mathbb {C}}[{\mathbf {x}}_n] := {\mathbb {C}}[x_1, \dots, x_n]$ by linear substitutions.
If $W \subset GL_n({\mathbb {C}})$ is any finite subgroup,
let
\begin{equation*}
{\mathbb {C}}[{\mathbf {x}}_n]^W := \{ f({\mathbf {x}}_n) \in {\mathbb {C}}[{\mathbf {x}}_n] \,:\, w.f({\mathbf {x}}_n) = f({\mathbf {x}}_n) \text{ for all $w \in W$} \}
\end{equation*}
denote the associated subspace of {\em invariant polynomials} and
let ${\mathbb {C}}[{\mathbf {x}}_n]^W_+ \subset {\mathbb {C}}[{\mathbf {x}}_n]^W$ denote
the collection of invariant polynomials with vanishing constant term.
The {\em invariant ideal} $I_W \subset {\mathbb {C}}[{\mathbf {x}}_n]$ is
the ideal $I_W := \langle {\mathbb {C}}[{\mathbf {x}}_n]^W_+ \rangle$ generated by ${\mathbb {C}}[{\mathbf {x}}_n]^W_+$ and the
{\em coinvariant algebra} is $R_W := {\mathbb {C}}[{\mathbf {x}}_n]/I_W$.
The quotient $R_W$ is a graded $W$-module.
A celebrated result of Chevalley \cite{C} states that if $W$ is a complex reflection group,
then $R_W$ is isomorphic to the regular representation ${\mathbb {C}}[W]$ as a $W$-module.
\begin{quote}
{\bf Notation.} {\em Throughout this paper $r$ will denote a positive integer. Unless otherwise stated, we
assume $r \geq 2$. Let $\zeta := e^{\frac{2 \pi i}{r}} \in {\mathbb {C}}$ and let $G := \langle \zeta \rangle$ be the
multiplicative
group of $r^{th}$ roots of unity in ${\mathbb {C}}^{\times}$.}
\end{quote}
Let us introduce the family of reflection groups we will focus on.
A matrix is {\em monomial } if it has a unique nonzero entry in every row and column.
Let $G_n$ be the group of $n \times n$ monomial matrices whose nonzero entries lie in $G$.
For example, if $r = 3$ we have
\begin{equation*}
g =
\begin{pmatrix}
0 & 0 & \zeta & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & \zeta^2 \\
\zeta & 0 & 0 & 0
\end{pmatrix} \in G_4.
\end{equation*}
Matrices in $G_n$ may be thought of combinatorially as {\em $r$-colored permutations}
$\pi_1^{c_1} \dots \pi_n^{c_n}$, where $\pi_1 \dots \pi_n$ is a permutation in ${\mathfrak{S}}_n$ and
$c_1 \dots c_n$ is a sequence of `colors' in the set $\{0, 1, \dots, r-1\}$ representing powers of $\zeta$.
For example, the above element of $G_4$ may be represented combinatorially as
$g = 4^1 2^0 1^1 3^2$.
In the usual classification of complex reflection groups we have $G_n = G(r,1,n)$. The group
$G_n$ is isomorphic to the wreath product ${\mathbb {Z}}_r \wr {\mathfrak{S}}_n = ({\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r) \rtimes {\mathfrak{S}}_n$,
where the symmetric group ${\mathfrak{S}}_n$ acts on the $n$-fold direct product of cyclic groups
${\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r$ by coordinate permutation.
For the sake of legibility, we suppress reference to $r$ in our notation for $G_n$ and related objects.
Let $I_n \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the invariant ideal associated to $G_n$. We have
$I_n = \langle e_1({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r) \rangle$, where
\begin{equation*}
e_d({\mathbf {x}}_n^r) = e_d(x_1^r, \dots, x_n^r) := \sum_{1 \leq i_1 < \cdots < i_d \leq n} x_{i_1}^r \cdots x_{i_d}^r
\end{equation*}
is the $d^{th}$ elementary symmetric function in the variable powers $x_1^r, \dots, x_n^r$.
Let $R_n := {\mathbb {C}}[{\mathbf {x}}_n]/I_n$ denote the coinvariant ring attached to $G_n$.
The algebraic properties of the quotient $R_n$ are governed by the combinatorial properties of
$r$-colored permutations in $G_n$.
Chevalley's result \cite{C} implies that $R_n \cong {\mathbb {C}}[G_n]$ as ungraded $G_n$-modules.
The fact that $e_1({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r)$ is a regular sequence in ${\mathbb {C}}[{\mathbf {x}}_n]$ gives the
following expression for the Hilbert series of $R_n$:
\begin{equation}
{\mathrm {Hilb}}(R_n; q) = \prod_{i = 1}^n \frac{1-q^{ri}}{1-q} = \sum_{g \in G_n} q^{{\mathrm {maj}}(g)},
\end{equation}
where ${\mathrm {maj}}$ is the {\em major index} statistic on $G_n$
(also known as the {\em flag-major index}; see \cite{HLR}).
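As a quick sanity check, the equidistribution above can be verified by brute force for small $n$ and $r$. The Python sketch below is purely illustrative (the encoding of a colored permutation as parallel lists of letters and colors is our own); it compares the coefficient list of $\sum_{g \in G_n} q^{{\mathrm {maj}}(g)}$ with that of $\prod_{i=1}^n (1-q^{ri})/(1-q)$.

```python
from itertools import permutations, product

def maj(pi, colors, r):
    # flag-major index: color sum plus r times the sum of descent positions;
    # in the order <, higher colors are smaller, hence the key (-color, letter)
    key = lambda j: (-colors[j], pi[j])
    des = [i + 1 for i in range(len(pi) - 1) if key(i) > key(i + 1)]
    return sum(colors) + r * sum(des)

def maj_generating_poly(n, r):
    # coefficient list of sum_{g in G_n} q^{maj(g)}
    coeffs = [0] * (r * n * (n + 1) // 2 + 1)
    for pi in permutations(range(1, n + 1)):
        for colors in product(range(r), repeat=n):
            coeffs[maj(pi, colors, r)] += 1
    while len(coeffs) > 1 and coeffs[-1] == 0:
        coeffs.pop()  # drop trailing zero coefficients
    return coeffs

def product_formula(n, r):
    # coefficient list of prod_{i=1}^n (1 - q^{ri})/(1 - q)
    poly = [1]
    for i in range(1, n + 1):
        factor = [1] * (r * i)  # 1 + q + ... + q^{ri-1}
        new = [0] * (len(poly) + len(factor) - 1)
        for a, ca in enumerate(poly):
            for b, cb in enumerate(factor):
                new[a + b] += ca * cb
        poly = new
    return poly
```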
Bagno and Biagioli \cite{BB} described a {\em descent monomial basis}
$\{b_g \,:\, g \in G_n\}$ of $R_n$ whose elements satisfy $\deg(b_g) = {\mathrm {maj}}(g)$.
Stembridge \cite[Thm. 6.6]{Stembridge} described the graded $G_n$-module structure of $R_n$
using (the $r \geq 1$ generalization of) standard Young tableaux.
When $r = 1$ and $G_n = {\mathfrak{S}}_n$ is the symmetric group, Haglund, Rhoades, and Shimozono
\cite[Defn. 1.1]{HRS} introduced and studied a generalization of the coinvariant algebra $R_n$ depending
on a positive integer $k \leq n$.
In this paper we extend \cite[Defn. 1.1]{HRS} to $r \geq 2$ by introducing the following {\em two} families of ideals
$I_{n,k}, J_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$.
\begin{defn}
\label{main-definition}
Let $n, k,$ and $r$ be nonnegative integers which satisfy $n \geq k, n \geq 1$, and $r \geq 2$.
We define
two quotients of the polynomial ring ${\mathbb {C}}[{\mathbf {x}}_n]$ as follows.
\begin{enumerate}
\item Let $I_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the ideal
\begin{equation*}
I_{n,k} := \langle x_1^{kr+1}, x_2^{kr+1}, \dots, x_n^{kr+1}, e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r) \rangle
\end{equation*}
and let $R_{n,k}$ be the corresponding quotient:
\begin{equation*}
R_{n,k} := {\mathbb {C}}[{\mathbf {x}}_n]/I_{n,k}.
\end{equation*}
\item Let $J_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the ideal
\begin{equation*}
J_{n,k} := \langle x_1^{kr}, x_2^{kr}, \dots, x_n^{kr}, e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r) \rangle
\end{equation*}
and let $S_{n,k}$ be the corresponding quotient:
\begin{equation*}
S_{n,k} := {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k}.
\end{equation*}
\end{enumerate}
\end{defn}
Both of the ideals $I_{n,k}$ and $J_{n,k}$ are homogeneous and stable under the action of
$G_n$ on ${\mathbb {C}}[{\mathbf {x}}_n]$. It follows that the quotients $R_{n,k}$ and $S_{n,k}$ are graded
$G_n$-modules. The ring introduced in \cite[Defn. 1.1]{HRS} is the quotient $S_{n,k}$ with $r = 1$.
When $k = n$, it can be shown
\footnote{By \cite[Sec. 7.2]{Bergeron} under the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$
we have $x_n^{nr} \in I_n$, and the ideal $I_n$ is stable under ${\mathfrak{S}}_n$.}
that for any $1 \leq i \leq n$, the variable power $x_i^{nr}$ lies in the invariant ideal
$I_n$, so that $I_{n,n} = J_{n,n} = I_n$, and $R_{n,n} = S_{n,n}$ are both equal to the classical
coinvariant algebra $R_n$ for $G_n$.
At the other extreme, we have $R_{n,0} \cong {\mathbb {C}}$ (the trivial representation in degree $0$)
and $S_{n,0} = 0$.
The reader may wonder why we are presenting two generalizations of the ring of \cite{HRS} rather than one.
The combinatorial reason for this is the presence of {\em zero blocks} in the $G_n$-analog of ordered
set partitions. These zero blocks do not appear in the case of \cite{HRS} when $r = 1$
(or in the case of the classical coinvariant algebra when $k = n$).
Roughly speaking, the ring $S_{n,k}$ will be a `zero block free' version of $R_{n,k}$.
These rings will be related in a nice way (see Proposition~\ref{r-to-s-reduction}), and
$S_{n,k}$ will be easier to analyze directly.
The generators of the ideal $I_{n,k}$ defining the quotient $R_{n,k}$ come in two flavors:
\begin{itemize}
\item high degree invariant polynomials $e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r)$, and
\item a collection of polynomials $x_1^{kr+1}, \dots, x_n^{kr + 1}$ whose linear span
$\mathrm{span} \{x_1^{kr+1}, \dots, x_n^{kr+1} \}$ is stable under the action of $G_n$ and carries the
dual of the defining action of $G_n$ on ${\mathbb {C}}^n$.
\end{itemize}
This extends the two flavors of generators for the ideal of \cite{HRS}. In the context of the 0-Hecke
algebra $H_n(0)$ attached to the symmetric group, Huang and Rhoades \cite{HuangRhoades}
defined another ideal (denoted in \cite{HuangRhoades} by $J_{n,k} \subseteq \mathbb{F}[{\mathbf {x}}_n]$,
where $\mathbb{F}$ is any field) with analogous types of generators: high degree $H_n(0)$-invariants
together with a copy of the defining representation of $H_n(0)$ sitting in homogeneous degree $k$.
It would be interesting to see if the favorable properties of the corresponding quotients
could be derived from this choice of generator selection in a more conceptual way.
In this paper we will prove that the structures of the rings
$R_{n,k}$ and $S_{n,k}$ are controlled by $G_n$-generalizations of ordered set
partitions.
We will use the usual $q$-analog notation
\begin{align*}
[n]_q := 1 + q + \cdots + q^{n-1} & &[n]!_q := [n]_q [n-1]_q \cdots [1]_q \\
{n \brack a_1, \dots , a_r}_q := \frac{[n]!_q}{[a_1]!_q \cdots [a_r]!_q}
& &{n \brack a}_q := \frac{[n]!_q}{[a]!_q [n-a]!_q}.
\end{align*}
We also let ${\mathrm {rev}}_q$ be the operator which reverses the coefficient sequences in polynomials in the
variable $q$ (over any ground ring).
For example, we have
\begin{equation*}
{\mathrm {rev}}_q(8q^2 + 7q + 6) = 6q^2 + 7q + 8.
\end{equation*}
Let ${\mathrm {Stir}}(n,k)$ be the (signless) Stirling number of the second kind counting set partitions of $[n]$ into $k$ blocks
and let ${\mathrm {Stir}}_q(n,k)$ denote the {\em $q$-Stirling number}
defined by the recursion
\begin{equation*}
{\mathrm {Stir}}_q(n,k) = [k]_q \cdot {\mathrm {Stir}}_q(n-1,k) + {\mathrm {Stir}}_q(n-1,k-1)
\end{equation*}
for $n, k \geq 1$ and the
initial condition ${\mathrm {Stir}}_q(0,k) = \delta_{0,k}$.
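For illustration only, here is a direct Python transcription of this recursion, with a polynomial in $q$ stored as its list of coefficients (an encoding chosen here for convenience).

```python
def stir_q(n, k):
    # q-Stirling number Stir_q(n, k) as a coefficient list in q,
    # via Stir_q(n,k) = [k]_q * Stir_q(n-1,k) + Stir_q(n-1,k-1)
    if n == 0:
        return [1] if k == 0 else [0]
    if k <= 0 or k > n:
        return [0]
    left = stir_q(n - 1, k)      # multiplied below by [k]_q = 1 + q + ... + q^{k-1}
    prev = stir_q(n - 1, k - 1)
    out = [0] * max(len(left) + k - 1, len(prev))
    for shift in range(k):
        for i, c in enumerate(left):
            out[i + shift] += c
    for i, c in enumerate(prev):
        out[i] += c
    while len(out) > 1 and out[-1] == 0:
        out.pop()  # drop trailing zero coefficients
    return out
```

Setting $q = 1$ (summing the coefficients) recovers ${\mathrm {Stir}}(n,k)$.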
Deferring various definitions to Section~\ref{Background}, we state our main results.
\begin{itemize}
\item As {\em ungraded} $G_n$-modules we have
\begin{center}
$R_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}]$ and $S_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}]$,
\end{center}
where ${\mathcal{F}}_{n,k}$ is the set of $k$-dimensional faces in the Coxeter complex attached
to $G_n$ and ${\mathcal{OP}}_{n,k}$ is the set of $r$-colored ordered set partitions of
$[n]$ with $k$ blocks (Corollary~\ref{ungraded-isomorphism-type}).
In particular, we have
\begin{align*}
\dim(R_{n,k}) &= \sum_{z = 0}^{n-k} {n \choose z} \cdot r^{n-z} \cdot k! \cdot {\mathrm {Stir}}(n-z,k), \\
\dim(S_{n,k}) &= r^n \cdot k! \cdot {\mathrm {Stir}}(n,k).
\end{align*}
\item The Hilbert series ${\mathrm {Hilb}}(R_{n,k}; q)$ and ${\mathrm {Hilb}}(S_{n,k};q)$ are given by
(Corollary~\ref{hilbert-series-corollary})
\begin{align*}
{\mathrm {Hilb}}(R_{n,k}; q) &= \sum_{z = 0}^{n-k} {n \choose z} \cdot q^{krz} \cdot
{\mathrm {rev}}_q( [r]_q^{n-z} \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n-z,k)), \\
{\mathrm {Hilb}}(S_{n,k}; q) &= {\mathrm {rev}}_q( [r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n,k)).
\end{align*}
\item Endow monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ with the lexicographic term order. The standard monomial
basis of $R_{n,k}$ is the collection of monomials $m = x_1^{a_1} \cdots x_n^{a_n}$ whose exponent
sequences $(a_1, \dots, a_n)$ are componentwise $\leq$ some shuffle of the sequences
$(r-1, 2r-1, \dots, kr-1)$ and $(\underbrace{kr, \dots, kr}_{n-k})$.
The standard monomial basis of $S_{n,k}$ is the collection of monomials
$m = x_1^{b_1} \cdots x_n^{b_n}$ whose exponent sequences $(b_1, \dots, b_n)$ are componentwise
$\leq$ some shuffle of the sequences $(r-1, 2r-1, \dots, kr-1)$ and $(\underbrace{kr-1, \dots, kr-1}_{n-k})$
(Theorem~\ref{artin-basis}).
\item There is a generalization of Bagno and Biagioli's descent monomial basis of $R_n$
to the rings $R_{n,k}$ and $S_{n,k}$ (Theorems~\ref{s-gs-basis-theorem} and \ref{r-gs-basis-theorem}).
\item We have an explicit description of the {\em graded} isomorphism type of the $G_n$-modules
$R_{n,k}$ and $S_{n,k}$ in terms of standard Young tableaux
(Theorem~\ref{graded-isomorphism-type}).
\end{itemize}
Although the properties of the rings $R_{n,k}$ (and $S_{n,k}$) shown above give natural extensions of the
corresponding properties of $R_n$, the proofs of these results will be quite different.
Since the classical invariant ideal $I_n$ is cut out by a regular sequence
$e_1({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r)$, standard tools from commutative algebra (the {\em Koszul complex})
can be used to derive the graded isomorphism type of $R_n$.
Since neither the dimension
$\dim(R_{n,k}) = \sum_{z = 0}^{n-k} {n \choose z} \cdot r^{n-z} \cdot k! \cdot {\mathrm {Stir}}(n-z,k)$ nor
$\dim(S_{n,k}) = r^n \cdot k! \cdot {\mathrm {Stir}}(n,k)$
have nice product formulas, we cannot hope to apply this technology to our situation.
Replacing the commutative algebra machinery used to analyze $R_n$
will be {\em combinatorial} commutative algebra machinery (Gr\"obner theory and straightening laws)
which will determine the structure of $R_{n,k}$.
Although some portions of our analysis will follow from the arguments of \cite{HRS}
after making the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$,
other arguments will have to be significantly adapted to account for the possible presence of zero blocks.
The rest of the paper is organized as follows.
In {\bf Section~\ref{Background}} we give background material related to $r$-colored ordered set partitions,
the Coxeter complex of $G_n$, symmetric functions, the representation theory of $G_n$, and
Gr\"obner theory.
In {\bf Section~\ref{Polynomial}} we prove some polynomial and symmetric function identities that will
be helpful in later sections.
In {\bf Section~\ref{Hilbert}} we calculate the standard monomial bases of $R_{n,k}$ and $S_{n,k}$
with respect to the lexicographic term order and calculate the Hilbert series of these quotients.
In {\bf Section~\ref{Descent}} we present our generalizations of the Bagno-Biagioli descent monomial
basis of $R_n$ to obtain descent monomial-type bases for $R_{n,k}$ and $S_{n,k}$.
In {\bf Section~\ref{Frobenius}} we derive the graded isomorphism type of the
$G_n$-modules $R_{n,k}$ and $S_{n,k}$.
We close in {\bf Section~\ref{Conclusion}} with some open questions.
\section{Background}
\label{Background}
\subsection{$r$-colored ordered set partitions}
We will make use of two orders on the
alphabet
\begin{equation*}
{\mathcal{A}}_r := \{i^c \,:\, i \in {\mathbb {Z}}_{> 0} \text{ and } 0 \leq c \leq r-1 \}
\end{equation*}
of $r$-colored positive integers. The first order $<$
weights colors more heavily than letter values, with higher colors being smaller:
\begin{equation*}
1^{r-1} < 2^{r-1} < \cdots < 1^{r-2} < 2^{r-2} < \cdots < 1^0 < 2^0 < \cdots.
\end{equation*}
The second order $\prec$ weights letter values more heavily than colors:
\begin{equation*}
1^{r-1} \prec 1^{r-2} \prec \cdots \prec 1^0 \prec 2^{r-1} \prec 2^{r-2} \prec \cdots \prec 2^0 \prec \cdots.
\end{equation*}
Let $w = w_1^{c_1} \dots w_n^{c_n}$ be any word in the alphabet ${\mathcal{A}}_r$.
The {\em descent set} and {\em ascent set} of $w$ are defined using the order $<$:
\begin{equation}
{\mathrm {Des}}(w) := \{1 \leq i \leq n-1 \,:\, w_i^{c_i} > w_{i+1}^{c_{i+1}} \}, \hspace{0.2in}
{\mathrm {Asc}}(w) := \{1 \leq i \leq n-1 \,:\, w_i^{c_i} < w_{i+1}^{c_{i+1}} \}.
\end{equation}
We write ${\mathrm {des}}(w) := |{\mathrm {Des}}(w)|$ and ${\mathrm {asc}}(w) := |{\mathrm {Asc}}(w)|$ for the number of descents
and ascents in $w$.
The {\em major index} ${\mathrm {maj}}(w)$ is given by the formula
\begin{equation}
{\mathrm {maj}}(w) := c(w) + r \cdot \sum_{i \in {\mathrm {Des}}(w)} i,
\end{equation}
where $c(w)$ denotes the sum of the colors of the letters in $w$.
This version of major index was defined by Haglund, Loehr, and Remmel in \cite{HLR}
(where it was termed `flag-major index').
Since we may view elements of $G_n$ as $r$-colored permutations, the objects
defined in the above paragraph make sense for $g \in G_n$.
For example, if $r = 3$ and $g = 3^0 4^1 6^2 2^0 5^2 1^2 \in G_6$, we have
${\mathrm {Des}}(g) = \{1,2,4,5\}, {\mathrm {Asc}}(g) = \{3\}, {\mathrm {des}}(g) = 4, {\mathrm {asc}}(g) = 1,$
and
\begin{equation*}
{\mathrm {maj}}(g) = (0 + 1 + 2 + 0 + 2 + 2) + 3 \cdot (1 + 2 + 4 + 5) = 43.
\end{equation*}
An {\em ordered set partition} is a set partition equipped with a total order on its blocks.
An {\em $r$-colored ordered set partition of size $n$}
is an ordered set partition $\sigma$ of $[n]$ in which every letter is
assigned a color in the set $\{0, 1, \dots, r-1\}$.
For example,
\begin{equation*}
\sigma = \{3^0,4^1\} \prec \{6^2\} \prec \{1^2,2^0,5^2\}
\end{equation*}
is a $3$-colored ordered set partition of size $6$ with $3$ blocks.
We let ${\mathcal{OP}}_{n,k}$ be the collection of $r$-colored ordered set partitions of size $n$ with $k$ blocks.
We have
\begin{equation}
|{\mathcal{OP}}_{n,k}| = r^n \cdot k! \cdot {\mathrm {Stir}}(n,k).
\end{equation}
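The count can be confirmed by brute force for small parameters: an $r$-colored ordered set partition of $[n]$ with $k$ blocks is the same data as a surjection $[n] \to [k]$ (recording the block of each letter) together with a color for each letter. The Python sketch below (illustrative only) does exactly this.

```python
from itertools import product

def stirling2(n, k):
    # Stir(n, k): set partitions of [n] into k blocks
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def count_colored_osp(n, k, r):
    # brute force over block assignments (surjections) and independent colors
    surjections = sum(1 for f in product(range(k), repeat=n) if len(set(f)) == k)
    return r ** n * surjections
```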
We will often use bars to represent colored ordered set partitions more
succinctly. Here we write block elements in increasing order with respect to $\prec$. Our example ordered set partition
becomes
\begin{equation*}
\sigma = (3^0 4^1 \mid 6^2 \mid 1^2 2^0 5^2 ).
\end{equation*}
We also have a descent starred notation for colored ordered set partitions, where we order elements within blocks
in a decreasing fashion with respect to $<$. Our example ordered set partition becomes
\begin{equation*}
\sigma = 3^0_* 4^1 \, \, 6^2 \, \, 2^0_* 5^2_* 1^2.
\end{equation*}
Notice that we use the order $\prec$ for the bar notation, but the order $<$ for the star notation.
The star notation represents $\sigma \in {\mathcal{OP}}_{n,k}$ as a pair $\sigma = ( g, S)$,
where $g \in G_n$, $|S| = n-k$ and $S \subseteq {\mathrm {Des}}(g)$.
Our example ordered set partition becomes
\begin{equation*}
\sigma = (3^0 4^1 6^2 2^0 5^2 1^2, \{1,4,5\}).
\end{equation*}
Let $\sigma \in {\mathcal{OP}}_{n,k}$ and let $(g,S)$ be the descent starred representation of $\sigma$.
The {\em major index} of $\sigma = (g, S)$ is
\begin{equation}
{\mathrm {maj}}(\sigma) = {\mathrm {maj}}(g, S) = c(\sigma) + r \cdot \left[ \sum_{i \in {\mathrm {Des}}(g)} i
- \sum_{i \in S} |{\mathrm {Des}}(g) \cap \{i, i+1, \dots, n\}| \right],
\end{equation}
where $c(\sigma)$ denotes the sum of the colors in $\sigma$.
In the example above, we have
\begin{equation*}
{\mathrm {maj}}(3^0_* 4^1 \, \, 6^2 \, \, 2^0_* 5^2_* 1^2) =
(0 + 1 + 2 + 0 + 2 + 2) + 3 \cdot [ (1 + 2 + 4 + 5) - (4 + 2 + 1) ] = 22.
\end{equation*}
Whereas the definition of ${\mathrm {maj}}$ for colored ordered set partitions used the order $<$ to compare elements,
the definition of ${\mathrm {coinv}}$ uses the order $\prec$. In particular, let $\sigma$ be a colored ordered set partition.
A {\em coinversion pair} in $\sigma$ is a pair of colored letters $i^c \preceq j^d$ appearing in $\sigma$ such that
\begin{equation*}
\begin{cases}
\text{at least one of $i^c$ and $j^d$ is $\prec$-minimal in its block in $\sigma$,} \\
\text{$i^c$ and $j^d$ belong to different blocks of $\sigma$, and} \\
\text{if $i^c$'s block is to the right of $j^d$'s block, then only $j^d$ is $\prec$-minimal in its block.}
\end{cases}
\end{equation*}
In our example $\sigma = (3^0 4^1 \mid 6^2 \mid 1^2 2^0 5^2 )$,
the
coinversion pairs are $3^0 6^2, 2^0 3^0 , 3^0 5^2, 2^0 6^2, 4^1 6^2,$ and $5^2 6^2$.
The statistic ${\mathrm {coinv}}(\sigma)$ is defined by
\begin{equation}
{\mathrm {coinv}}(\sigma) = [n\cdot(r-1) - c(\sigma)] + r \cdot (\text{number of coinversion pairs in $\sigma$}).
\end{equation}
In our example we have
\begin{equation*}
{\mathrm {coinv}}(3^0 4^1 \mid 6^2 \mid 1^2 2^0 5^2 ) = [6 \cdot 2 - (0 + 1 + 2 + 2 + 0 + 2)] + 3 \cdot 6 = 23.
\end{equation*}
In particular, whereas the statistic ${\mathrm {maj}}$ involves a sum over colors, the statistic ${\mathrm {coinv}}$ involves a sum
over {\em complements} of colors.
The statistic ${\mathrm {coinv}}$ on $r$-colored $k$-block ordered set partitions of $[n]$ is complementary to the statistic
${\mathrm {inv}}$ defined in \cite[Sec. 4]{Rhoades}.
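Since the three conditions above are easy to misread, a direct transcription may help. In the sketch below (illustrative encoding: a colored ordered set partition as a list of blocks, each block a set of (letter, color) pairs), the order $\prec$ is realized by the key $(i, -c)$.

```python
def coinv(blocks, n, r):
    key = lambda lc: (lc[0], -lc[1])  # the order ≺: letter first, higher color smaller
    minima = [min(B, key=key) for B in blocks]
    entries = [(lc, b) for b, B in enumerate(blocks) for lc in B]
    pairs = 0
    for x, bx in entries:
        for y, by in entries:
            if bx == by or key(x) >= key(y):
                continue                       # need x ≺ y in different blocks
            x_min, y_min = x == minima[bx], y == minima[by]
            if not (x_min or y_min):
                continue                       # at least one must be ≺-minimal
            if bx > by and not (y_min and not x_min):
                continue                       # x's block right of y's: only y minimal
            pairs += 1
    color_sum = sum(c for B in blocks for (_, c) in B)
    return (n * (r - 1) - color_sum) + r * pairs
```

Applied to the running example, this finds the six coinversion pairs listed above.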
We need an extension of colored set partitions involving repeated letters.
An {\em $r$-colored ordered multiset partition} $\mu$ is a sequence of finite nonempty
sets $\mu = (M_1, \dots, M_k)$ of elements from the alphabet ${\mathcal{A}}_r$.
The {\em size} of $\mu$ is $|M_1| + \cdots + |M_k|$ and we say that $\mu$ has {\em $k$ blocks}.
For example, we have that $\mu = (2^1 2^0 3^1 \mid 1^2 3^1 \mid 2^0 4^2 )$ is a
$3$-colored ordered multiset partition
of size $7$ with $3$ blocks.
We emphasize that the blocks of ordered multiset partitions are {\em sets}; there
are no repeated letters within blocks (although the same letter can occur with different colors within a single block).
If $\mu$ is an ordered multiset partition, the statistics ${\mathrm {coinv}}(\mu)$ and ${\mathrm {maj}}(\mu)$ have the same definitions
as in the case of no repeated letters.
\subsection{$G_n$-faces}
To describe the combinatorics of the rings $R_{n,k}$,
we introduce the following concept of a $G_n$-face.
In the following definition we require $r \geq 2$.
\begin{defn}
\label{g-face}
A {\em $G_n$-face} is an ordered set partition
$\sigma = (B_1 \mid B_2 \mid \cdots \mid B_m)$ of $[n]$ such that the letters in every block of $\sigma$,
with the possible exception of the first block $B_1$, are decorated by the colors $\{0, 1, \dots, r-1\}$.
\end{defn}
Let $\sigma = (B_1 \mid B_2 \mid \cdots \mid B_m)$ be a $G_n$-face. If the letters in $B_1$ are uncolored, then
$B_1$ is called the {\em zero block} of $\sigma$. The {\em dimension} of $\sigma$ is the number of nonzero blocks
in $\sigma$. Let ${\mathcal{F}}_{n,k}$ denote the set of $G_n$-faces of dimension $k$.
For example, if $r = 3$ we have
\begin{align*}
( 2 5 \mid 1^1 3^2 6^2 \mid 4^1 ) &\in {\mathcal{F}}_{6,2} \text{ and } \\
( 2^2 5^1 \mid 1^1 3^2 6^2 \mid 4^1) &\in {\mathcal{F}}_{6,3},
\end{align*}
where the lack of colors on the letters of the first block $\{2,5\}$ of the top face indicates that $\{2,5\}$ is a
zero block. When $k = n$, we have ${\mathcal{F}}_{n,n} = {\mathcal{OP}}_{n,n} = G_n$ as there cannot be a zero block.
The notation {\em face} in Definition~\ref{g-face} comes from the identification of the $k$-dimensional
$G_n$-faces with the $k$-dimensional faces in the Coxeter complex of $G_n$.
The set ${\mathcal{F}}_{n,k}$ may also be identified with the
collection of rank $k$ elements in the Dowling lattice $Q_n(\Gamma)$ attached
to a group $\Gamma$ of size $r$ (see \cite{Dowling}). By considering the possible sizes of zero
blocks, we see that the number of faces in ${\mathcal{F}}_{n,k}$ is
\begin{equation}
|{\mathcal{F}}_{n,k}| = \sum_{z = 0}^{n-k} {n \choose z} \cdot r^{n-z} \cdot k! \cdot {\mathrm {Stir}}(n-z,k).
\end{equation}
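Because both the displayed formula and the objects it counts are elementary, the count is easy to check by machine for small parameters. The following Python sketch (helper names are ours, not the paper's) enumerates faces directly and compares the result with the sum over zero-block sizes:

```python
from itertools import combinations, product
from math import comb, factorial

def stirling2(n, k):
    # Stirling numbers of the second kind S(n, k) via the usual recurrence
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def num_faces(n, k, r):
    # the displayed formula: sum over the size z of the zero block
    return sum(comb(n, z) * r ** (n - z) * factorial(k) * stirling2(n - z, k)
               for z in range(n - k + 1))

def num_faces_brute(n, k, r):
    # direct enumeration: choose a (possibly empty) zero block Z, then an
    # ordered set partition of [n] - Z into k nonempty blocks; each of the
    # n - |Z| remaining letters carries one of r colors
    letters = range(1, n + 1)
    total = 0
    for z in range(n - k + 1):
        for Z in combinations(letters, z):
            rest = [x for x in letters if x not in Z]
            for assign in product(range(k), repeat=len(rest)):
                if len(set(assign)) == k:  # all k nonzero blocks occupied
                    total += r ** len(rest)
    return total
```

For $k = n$ the sum collapses to its $z = 0$ term, recovering $|{\mathcal{F}}_{n,n}| = r^n n! = |G_n|$ as noted above.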
We will consider an action of the group $G_n$ on ${\mathcal{F}}_{n,k}$. To describe this action it suffices
to describe the action of permutation matrices ${\mathfrak{S}}_n \subseteq G_n$
and the diagonal subgroup ${\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r \subseteq G_n$.
If $\pi = \pi_1 \dots \pi_n \in {\mathfrak{S}}_n$, then
$\pi$ acts on ${\mathcal{F}}_{n,k}$ by replacing each letter $i$ with $\pi_i$ while preserving colors.
For example, if $\pi = 614253 \in {\mathfrak{S}}_6$, then
\begin{equation*}
\pi. (25 \mid 1^1 3^2 6^2 \mid 4^1) = (15 \mid 6^1 4^2 3^2 \mid 2^1) = (15 \mid 3^2 4^2 6^1 \mid 2^1).
\end{equation*}
A diagonal matrix $g = \mathrm{diag}(\zeta^{c_1}, \dots, \zeta^{c_n})$ acts by increasing the color of the letter
$i$ by $c_i$ (mod $r$), while leaving elements in the zero block uncolored.
For example, if $r = 3$
an example action of the diagonal matrix $g = \mathrm{diag}(\zeta, \zeta^2, \zeta^2, \zeta, \zeta^2, \zeta) \in G_6$ is
\begin{equation*}
g. (25 \mid 1^1 3^2 6^2 \mid 4^1) = (25 \mid 1^2 3^1 6^0 \mid 4^2).
\end{equation*}
It is clear that the action of $G_n$ on ${\mathcal{F}}_{n,k}$ preserves the subset ${\mathcal{OP}}_{n,k}$ of $r$-colored
ordered set partitions.
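The two generating actions can be modeled in a few lines. In the sketch below (the encoding and names are ours), a face is a list of blocks, each a frozenset of (letter, color) pairs, with color None marking letters of the zero block; both worked examples above serve as checks:

```python
def act_perm(pi, face):
    # pi maps each letter to its image: the permutation replaces letter l
    # by pi[l], preserving colors (None marks the uncolored zero block)
    return [frozenset((pi[l], c) for (l, c) in B) for B in face]

def act_diag(cvec, r, face):
    # diag(zeta^{c_1}, ..., zeta^{c_n}) adds c_l to the color of letter l
    # modulo r, leaving zero-block letters uncolored
    return [frozenset((l, c if c is None else (c + cvec[l]) % r) for (l, c) in B)
            for B in face]
```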
We extend the definition of ${\mathrm {coinv}}$ to $G_n$-faces as follows.
There is a natural map
\begin{equation}
\pi: {\mathcal{F}}_{n,k} \rightarrow \bigcup_{z = 0}^{n-k} {\mathcal{OP}}_{n-z,k}
\end{equation}
which removes the zero block $Z$ of a $G_n$-face
(if present), and then maps the letters in $[n] - Z$ onto $\{1, 2, \dots, n - |Z| \}$ via an order-preserving
bijection while preserving colors. For example, we have
\begin{equation*}
\pi: (2 5 \mid 1^1 3^2 6^2 \mid 4^1) \mapsto (1^1 2^2 4^2 \mid 3^1).
\end{equation*}
If $\sigma$ is a $G_n$-face whose zero block has size $z$, we define ${\mathrm {coinv}}(\sigma)$ by
\begin{equation}
{\mathrm {coinv}}(\sigma) := krz + {\mathrm {coinv}}(\pi(\sigma)).
\end{equation}
In the $r = 3$ example above, we have
\begin{equation*}
{\mathrm {coinv}}(2 5 \mid 1^1 3^2 6^2 \mid 4^1) = 2 \cdot 3 \cdot 2 + {\mathrm {coinv}}(1^1 2^2 4^2 \mid 3^1) = 12 + 8 = 20.
\end{equation*}
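The map $\pi$ and the resulting extension of ${\mathrm {coinv}}$ can be checked mechanically. In the Python sketch below (same encoding as before: blocks are frozensets of (letter, color) pairs, with color None for the zero block), the value ${\mathrm {coinv}}(\pi(\sigma)) = 8$ is taken from the worked example, since the definition of ${\mathrm {coinv}}$ on ordered multiset partitions appears earlier in the paper:

```python
def project(face):
    # strip the zero block (if present) and standardize the remaining letters
    # to 1, 2, ..., n - |Z| by the order-preserving bijection, keeping colors
    has_zero = any(c is None for (_, c) in face[0])
    z = len(face[0]) if has_zero else 0
    blocks = face[1:] if has_zero else face
    letters = sorted(l for B in blocks for (l, _) in B)
    std = {l: i + 1 for i, l in enumerate(letters)}
    return [frozenset((std[l], c) for (l, c) in B) for B in blocks], z
```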
\subsection{Symmetric functions}
For $n \geq 0$,
a {\em (weak) composition of $n$} is a sequence $\alpha = (\alpha_1, \dots, \alpha_k)$
of nonnegative integers with $\alpha_1 + \cdots + \alpha_k = n$.
We write $\alpha \models n$ or $|\alpha| = n$ to indicate that $\alpha$ is a composition of $n$.
A {\em partition of $n$} is a composition $\lambda$ of $n$ whose parts are positive and weakly
decreasing. We write $\lambda \vdash n$ to indicate that $\lambda$ is a partition of $n$.
If $\lambda$ and $\mu$ are partitions (of any size) we say that $\lambda$ {\em dominates} $\mu$ and
write $\lambda \geq_{dom} \mu$ if
$\lambda_1 + \cdots + \lambda_i \geq \mu_1 + \cdots + \mu_i$ for all $i \geq 1$.
The {\em Ferrers diagram} of a partition $\lambda$
(in English notation) consists of $\lambda_i$ left-justified boxes in row $i$.
The Ferrers diagram of $(4,2,2) \vdash 8$ is shown below.
The {\em conjugate} $\lambda'$ of a partition $\lambda$ is obtained by reflecting the Ferrers diagram across
its main diagonal. For example, we have $(4,2,2)' = (3,3,1,1)$.
\begin{small}
\begin{center}
\begin{Young}
& & & \cr
& \cr
& \cr
\end{Young}
\end{center}
\end{small}
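Conjugation and dominance are straightforward to compute; the short Python sketch below (helper names ours) verifies the example $(4,2,2)' = (3,3,1,1)$:

```python
def conjugate(lam):
    # entry j (0-indexed) of lambda' counts the parts of lambda exceeding j,
    # i.e. the number of boxes in column j + 1 of the Ferrers diagram
    return [sum(1 for part in lam if part > j) for j in range(lam[0])] if lam else []

def dominates(lam, mu):
    # lambda >=_dom mu: compare partial sums, padding with zeros
    m = max(len(lam), len(mu))
    a = list(lam) + [0] * (m - len(lam))
    b = list(mu) + [0] * (m - len(mu))
    return all(sum(a[:i + 1]) >= sum(b[:i + 1]) for i in range(m))
```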
For an infinite sequence of variables ${\mathbf {y}} = (y_1, y_2, \dots )$, let $\Lambda({\mathbf {y}})$ denote the ring of symmetric
functions in the variable set ${\mathbf {y}}$ with coefficients in the field ${\mathbb {Q}}(q)$.
The ring $\Lambda({\mathbf {y}}) = \bigoplus_{n \geq 0} \Lambda({\mathbf {y}})_n$ is graded by
degree. The degree $n$ piece $\Lambda({\mathbf {y}})_n$ has vector space dimension equal to the number of partitions of $n$.
For a partition $\lambda$, let
\begin{center}
$\begin{array}{ccccc}
m_{\lambda}({\mathbf {y}}), & e_{\lambda}({\mathbf {y}}), & h_{\lambda}({\mathbf {y}}), & s_{\lambda}({\mathbf {y}})
\end{array}$
\end{center}
be the corresponding {\em monomial,
elementary, (complete) homogeneous,} and {\em Schur} symmetric functions.
As $\lambda$ varies over the collection of all partitions, these symmetric functions give four different bases
for $\Lambda({\mathbf {y}})$.
Given any composition $\beta$ whose nonincreasing rearrangement is the partition $\lambda$,
we extend this notation by setting $e_{\beta}({\mathbf {y}}) := e_{\lambda}({\mathbf {y}})$ and
$h_{\beta}({\mathbf {y}}) := h_{\lambda}({\mathbf {y}})$.
Let $\omega: \Lambda({\mathbf {y}}) \rightarrow \Lambda({\mathbf {y}})$ be the linear map
which sends
$s_{\lambda}({\mathbf {y}})$ to $s_{\lambda'}({\mathbf {y}})$ for all partitions $\lambda$.
The map $\omega$ is an involution and a ring automorphism. For any partition $\lambda$,
we have $\omega(e_{\lambda}({\mathbf {y}})) = h_{\lambda}({\mathbf {y}})$ and
$\omega(h_{\lambda}({\mathbf {y}})) = e_{\lambda}({\mathbf {y}})$.
We let $\langle \cdot, \cdot \rangle$ denote the {\em Hall inner product} on $\Lambda({\mathbf {y}})$. This can be
defined by either of the rules $\langle s_{\lambda}({\mathbf {y}}), s_{\mu}({\mathbf {y}}) \rangle = \delta_{\lambda,\mu}$
or $\langle h_{\lambda}({\mathbf {y}}), m_{\mu}({\mathbf {y}}) \rangle = \delta_{\lambda, \mu}$ for all partitions $\lambda, \mu$.
If $F({\mathbf {y}}) \in \Lambda({\mathbf {y}})$ is any symmetric function, let $F({\mathbf {y}})^{\perp}$ be the linear operator on
$\Lambda({\mathbf {y}})$ which is adjoint to the operation of multiplication by $F({\mathbf {y}})$. That is, we have
\begin{equation}
\langle F({\mathbf {y}})^{\perp} G({\mathbf {y}}), H({\mathbf {y}}) \rangle = \langle G({\mathbf {y}}), F({\mathbf {y}}) H({\mathbf {y}}) \rangle
\end{equation}
for all symmetric functions $G({\mathbf {y}}), H({\mathbf {y}}) \in \Lambda({\mathbf {y}})$.
The representation theory of $G_n$ is analogous to that of ${\mathfrak{S}}_n$,
but involves $r$-tuples of objects.
Given any $r$-tuple $\bm{o} = (o^{(1)}, o^{(2)}, \dots, o^{(r-1)}, o^{(r)})$ of objects, we define the {\em dual}
$\bm{o^*}$ to be the $r$-tuple
\begin{equation}
\bm{o^*} := (o^{(r-1)}, \dots, o^{(2)}, o^{(1)}, o^{(r)})
\end{equation}
obtained by reversing the first $r-1$ terms in the sequence $\bm{o}$.
At the algebraic level, the operator $\bm{o} \mapsto \bm{o^*}$ corresponds to the
entrywise action of
complex conjugation
on matrices in $G_n$ (which is trivial when $r = 1$ or $r = 2$).
If $1 \leq i \leq r$, we define the {\em dual} $i^*$ of $i$ by the rule
\begin{equation}
i^* = \begin{cases}
r-i & 1 \leq i \leq r-1 \\
r & i = r.
\end{cases}
\end{equation}
We therefore have
\begin{equation}
\bm{o^*} = (o^{(1^*)}, \dots, o^{(r^*)}) \text{ if } \bm{o} = (o^{(1)}, \dots, o^{(r)}).
\end{equation}
For a positive integer $n$, an {\em $r$-composition}
$\bm{\alpha}$ of $n$ is an $r$-tuple of compositions
$\bm{\alpha} = (\alpha^{(1)}, \dots, \alpha^{(r)})$ which satisfies
$|\bm{\alpha}| := |\alpha^{(1)}| + \cdots + |\alpha^{(r)}| = n$.
We write $\bm{\alpha} \models_r n$ to indicate that $\bm{\alpha}$ is an $r$-composition of $n$.
Similarly, an {\em $r$-partition}
${ \bm{\lambda} } = (\lambda^{(1)}, \dots, \lambda^{(r)})$ of $n$ is an $r$-tuple of partitions with
$|{ \bm{\lambda} }| := |\lambda^{(1)}| + \cdots + |\lambda^{(r)}| = n$.
We write ${ \bm{\lambda} } \vdash_r n$ to mean that ${ \bm{\lambda} }$ is an $r$-partition of $n$.
The {\em conjugate} of an $r$-partition ${ \bm{\lambda} } = (\lambda^{(1)}, \dots, \lambda^{(r)})$
is defined componentwise;
$\bm{\lambda'} := (\lambda^{(1)'}, \dots, \lambda^{(r)'})$.
The {\em Ferrers diagram} of an $r$-partition $\bm{\lambda} = (\lambda^{(1)}, \dots, \lambda^{(r)})$
is the $r$-tuple of Ferrers diagrams of its constituent partitions. The Ferrers diagram of the
$3$-partition $((3,2), \varnothing, (2,2)) \vdash_3 9$ is shown below.
\begin{center}
\begin{small}
\begin{Young}
& & \cr
& \cr
\end{Young}, \, \,
\begin{large}$\varnothing$\end{large}, \, \,
\begin{Young}
& \cr
& \cr
\end{Young}
\end{small}
\end{center}
Let ${ \bm{\lambda} } = (\lambda^{(1)}, \dots, \lambda^{(r)}) \vdash_r n$ be an $r$-partition of $n$.
A {\em semistandard tableau ${ \bm{T}}$ of shape ${ \bm{\lambda} }$} is a tuple
${ \bm{T}} = (T^{(1)}, \dots, T^{(r)})$, where $T^{(i)}$ is a filling of the boxes of $\lambda^{(i)}$ with positive integers
which increase weakly across rows and strictly down columns.
A semistandard tableau ${ \bm{T}}$ of shape ${ \bm{\lambda} }$ is {\em standard} if the entries
$1, 2, \dots, n$ all appear precisely once in ${ \bm{T}}$.
Let ${\mathrm {SYT}}^r(n)$ denote the collection of all possible standard tableaux with $r$ components and $n$
boxes.
For example, let ${ \bm{\lambda} }= ((3,2), \varnothing, (2,2)) \vdash_3 9$.
A semistandard tableau ${ \bm{T}} = (T^{(1)}, T^{(2)}, T^{(3)})$ of shape ${ \bm{\lambda} }$ is
\begin{center}
\begin{small}
\begin{Young}
1 & 3 & 3\cr
3 & 4 \cr
\end{Young}, \, \,
\begin{large}$\varnothing$\end{large}, \, \,
\begin{Young}
1 & 3 \cr
4 & 4 \cr
\end{Young}
\end{small}.
\end{center}
A standard tableau of shape ${ \bm{\lambda} }$ is
\begin{center}
\begin{small}
\begin{Young}
3 & 6 & 9\cr
5 & 7 \cr
\end{Young}, \, \,
\begin{large}$\varnothing$\end{large}, \, \,
\begin{Young}
1 & 4 \cr
2 &8 \cr
\end{Young}
\end{small}.
\end{center}
Let $\bm{T} = (T^{(1)}, \dots, T^{(r)}) \in {\mathrm {SYT}}^r(n)$
be a standard tableau with $n$ boxes. A letter $1 \leq i \leq n-1$ is called a
{\em descent} of $\bm{T}$ if
\begin{itemize}
\item the letters $i$ and $i+1$ appear in the same component $T^{(j)}$ of $\bm{T}$, and
$i+1$ appears in a row below $i$
in $T^{(j)}$, or
\item the letter $i+1$ appears in a component of $\bm{T} = (T^{(1)}, \dots, T^{(r)})$ strictly to the right of the component
containing $i$.
\end{itemize}
We let ${\mathrm {Des}}(\bm{T}) := \{ 1 \leq i \leq n-1 \,:\, \text{$i$ is a descent of $\bm{T}$} \}$ denote the collection of all
descents of $\bm{T}$ and let ${\mathrm {des}}(\bm{T}) := | {\mathrm {Des}}(\bm{T}) |$ denote the number of descents of $\bm{T}$.
The {\em major index} of $\bm{T}$ is
\begin{equation}
{\mathrm {maj}}(\bm{T}) := r \cdot \sum_{i \in {\mathrm {Des}}(\bm{T})} i + \sum_{j = 1}^r (j-1) \cdot |T^{(j)}|,
\end{equation}
where $|T^{(j)}|$ is the number of boxes in the component $T^{(j)}$.
For example, if $\bm{T} = (T^{(1)}, T^{(2)}, T^{(3)})$ is the standard tableau above, then
${\mathrm {Des}}(\bm{T}) = \{1,3,6,7\}, {\mathrm {des}}(\bm{T}) = 4$, and
\begin{equation*}
{\mathrm {maj}}(\bm{T}) = 3 \cdot (1 + 3 + 6 + 7) + (0 \cdot 5 + 1 \cdot 0 + 2 \cdot 4) = 59.
\end{equation*}
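The descent set and major index can be computed directly from the two bulleted conditions. In the Python sketch below (components are encoded as lists of rows; the names are ours), the worked example above serves as the test case:

```python
def descents(T):
    # T: tuple of components, each a list of rows of entries
    pos = {}
    for comp_idx, comp in enumerate(T):
        for row_idx, row in enumerate(comp):
            for entry in row:
                pos[entry] = (comp_idx, row_idx)
    n = len(pos)
    # i is a descent if i + 1 lies in a component strictly to the right,
    # or in the same component but in a lower row
    return {i for i in range(1, n)
            if pos[i + 1][0] > pos[i][0]
            or (pos[i + 1][0] == pos[i][0] and pos[i + 1][1] > pos[i][1])}

def maj(T, r):
    # maj = r * (sum of descents) + sum_j (j - 1) |T^{(j)}|, components 1-indexed
    sizes = [sum(len(row) for row in comp) for comp in T]
    return r * sum(descents(T)) + sum(j * s for j, s in enumerate(sizes))
```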
For $1 \leq i \leq r$, let ${\mathbf {x}}^{(i)} = (x_1^{(i)}, x_2^{(i)}, \dots )$ be an infinite list of variables and let
$\Lambda({\mathbf {x}}^{(i)})$ be the ring of symmetric functions in the variables ${\mathbf {x}}^{(i)}$ with coefficients in ${\mathbb {Q}}(q)$.
We use ${\mathbf {x}}$ to denote the union of the $r$ variable sets ${\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)}$.
Let $\Lambda^r({\mathbf {x}})$ be the tensor product
\begin{equation*}
\Lambda^r({\mathbf {x}}) = \Lambda({\mathbf {x}}^{(1)}) \otimes \cdots \otimes \Lambda({\mathbf {x}}^{(r)}).
\end{equation*}
We can think of $\Lambda^r({\mathbf {x}})$ as the ring of formal power series in ${\mathbb {Q}}(q)[[{\mathbf {x}}]]$ which are symmetric
in the variable sets ${\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)}$ separately.
The algebra $\Lambda^r({\mathbf {x}})$ is spanned by generating tensors of the form
\begin{equation*}
F_1({\mathbf {x}}^{(1)}) \cdot \ldots \cdot F_r({\mathbf {x}}^{(r)}) :=
F_1({\mathbf {x}}^{(1)}) \otimes \cdots \otimes F_r({\mathbf {x}}^{(r)}),
\end{equation*}
where $F_i({\mathbf {x}}^{(i)}) \in \Lambda({\mathbf {x}}^{(i)})$ is a symmetric function in the variables ${\mathbf {x}}^{(i)}$.
The algebra $\Lambda^r({\mathbf {x}})$
is graded via
\begin{equation*}
\deg(F_1({\mathbf {x}}^{(1)}) \cdot \ldots \cdot F_{r}({\mathbf {x}}^{(r)})) :=
\deg(F_1({\mathbf {x}}^{(1)})) + \cdots + \deg(F_{r}({\mathbf {x}}^{(r)})),
\end{equation*}
where the $F_i({\mathbf {x}}^{(i)})$ are homogeneous.
The standard bases of $\Lambda^r({\mathbf {x}})$ are obtained from those of
$\Lambda({\mathbf {x}}^{(1)}), \dots, \Lambda({\mathbf {x}}^{(r)})$ by multiplication. More precisely,
let $\bm{\lambda} = (\lambda^{(1)}, \dots, \lambda^{(r)})$ be an $r$-partition.
We define elements
\begin{equation*}
\bm{m_{\lambda}}({\mathbf {x}}), \bm{e_{\lambda}}({\mathbf {x}}), \bm{h_{\lambda}}({\mathbf {x}}), \bm{s_{\lambda}}({\mathbf {x}})
\in \Lambda^r({\mathbf {x}})
\end{equation*}
by
\begin{center}
$\begin{array}{cc}
\bm{m_{\lambda}}({\mathbf {x}}) := m_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots m_{\lambda^{(r)}}({\mathbf {x}}^{(r)}), &
\bm{e_{\lambda}}({\mathbf {x}}) := e_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots e_{\lambda^{(r)}}({\mathbf {x}}^{(r)}), \\
\bm{h_{\lambda}}({\mathbf {x}}) := h_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots h_{\lambda^{(r)}}({\mathbf {x}}^{(r)}),
& \bm{s_{\lambda}}({\mathbf {x}}) := s_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots s_{\lambda^{(r)}}({\mathbf {x}}^{(r)}).
\end{array}$
\end{center}
As ${ \bm{\lambda} }$ varies over the collection of all $r$-partitions, any of the sets
$\{ \bm{m_{\lambda}}({\mathbf {x}}) \}, \{ \bm{e_{\lambda}}({\mathbf {x}}) \}, \{ \bm{h_{\lambda}}({\mathbf {x}}) \},$ or
$\{ \bm{s_{\lambda}}({\mathbf {x}}) \}$ forms a basis for $\Lambda^r({\mathbf {x}})$.
If ${ \bm{\beta} } = (\beta^{(1)}, \dots, \beta^{(r)})$ is an $r$-composition, we extend this notation by setting
\begin{center}
$\begin{array}{cc}
\bm{e_{\beta}}({\mathbf {x}}) := e_{\beta^{(1)}}({\mathbf {x}}^{(1)}) \cdots e_{\beta^{(r)}}({\mathbf {x}}^{(r)}), &
\bm{h_{\beta}}({\mathbf {x}}) := h_{\beta^{(1)}}({\mathbf {x}}^{(1)}) \cdots h_{\beta^{(r)}}({\mathbf {x}}^{(r)}).
\end{array}$
\end{center}
The Schur functions $\bm{s_{\lambda}}({\mathbf {x}})$ admit the following combinatorial
description. If ${ \bm{T}} = (T^{(1)}, \dots, T^{(r)})$ is a semistandard tableau with $r$ components, let
${\mathbf {x}}^{{ \bm{T}}}$ be the monomial in the variable set ${\mathbf {x}}$ where the exponent of $x^{(i)}_j$ equals the multiplicity of $j$
in the tableau $T^{(i)}$.
For example, if $r = 3$ and ${ \bm{T}} = (T^{(1)}, T^{(2)}, T^{(3)})$ is as above, we have
\begin{equation*}
{\mathbf {x}}^{{ \bm{T}}} = (x^{(1)}_1)^1 (x^{(1)}_3)^3 (x^{(1)}_4)^1 (x^{(3)}_1)^1 (x^{(3)}_3)^1 (x^{(3)}_4)^2.
\end{equation*}
Similarly, if $w$ is any word in the $r$-colored positive integers ${\mathcal{A}}_r$, let ${\mathbf {x}}^w$ be the monomial in ${\mathbf {x}}$
where the exponent of $x^{(i)}_j$ equals the multiplicity of $j^{i-1}$ in the word $w$.
Also, if
${ \bm{\beta} } = (\beta^{(1)}, \dots, \beta^{(r)})$ is an $r$-composition, define the monomial
${\mathbf {x}}^{{ \bm{\beta} }}$ by
\begin{equation}
{\mathbf {x}}^{{ \bm{\beta} }} := (x^{(1)}_1)^{\beta^{(1)}_1} (x^{(1)}_2)^{\beta^{(1)}_2} \cdots (x^{(2)}_1)^{\beta^{(2)}_1}
(x^{(2)}_2)^{\beta^{(2)}_2} \cdots
\end{equation}
Given an $r$-partition ${ \bm{\lambda} } \vdash_r n$, we have
\begin{equation}
\bm{s_{\lambda}}({\mathbf {x}}) = \sum_{{ \bm{T}}} {\mathbf {x}}^{{ \bm{T}}},
\end{equation}
where the sum is over all semistandard tableaux ${ \bm{T}}$ of shape ${ \bm{\lambda} }$.
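This combinatorial description is easy to test in small cases. The sketch below (our own helper, not from the paper) counts semistandard tableaux of a single partition shape with bounded entries, which evaluates $s_{\lambda}$ at $x_1 = \cdots = x_m = 1$; for an $r$-partition the corresponding count is simply the product over components:

```python
def count_ssyt(shape, m):
    # backtracking count of semistandard fillings of a partition shape with
    # entries in {1, ..., m}: rows weakly increase, columns strictly increase
    cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
    fill = {}
    def rec(idx):
        if idx == len(cells):
            return 1
        i, j = cells[idx]
        lo = 1
        if j > 0:
            lo = max(lo, fill[(i, j - 1)])      # weak increase along the row
        if i > 0:
            lo = max(lo, fill[(i - 1, j)] + 1)  # strict increase down the column
        total = 0
        for v in range(lo, m + 1):
            fill[(i, j)] = v
            total += rec(idx + 1)
        fill.pop((i, j), None)
        return total
    return rec(0)
```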
The Hall inner product $\langle \cdot, \cdot \rangle$ extends to $\Lambda^r({\mathbf {x}})$ by the rule
\begin{equation}
\langle \bm{s_{\lambda}}({\mathbf {x}}), \bm{s_{\mu^*}}({\mathbf {x}}) \rangle =
\langle \bm{h_{\lambda}}({\mathbf {x}}), \bm{m_{\mu^*}}({\mathbf {x}}) \rangle = \delta_{{ \bm{\lambda} }, \bm{\mu}}
\end{equation}
for all $r$-partitions ${ \bm{\lambda} }$ and $\bm{\mu}$.
The presence of duals in this definition comes from the nontriviality of complex conjugation on
$G_n$ for $r > 2$.
The involution $\omega$ is defined on $\Lambda^r({\mathbf {x}}) = \Lambda({\mathbf {x}}^{(1)}) \otimes \cdots \otimes \Lambda({\mathbf {x}}^{(r)})$
by applying $\omega$ in each component separately.
The map $\omega$ is an isometry of the inner product $\langle \cdot, \cdot \rangle$.
If $\bm{F(x)} \in \Lambda^r({\mathbf {x}})$,
we let $\bm{F(x)}^{\perp}$ be the operator on $\Lambda^r({\mathbf {x}})$ which is adjoint to multiplication by $\bm{F(x)}$ under the
inner product $\langle \cdot, \cdot \rangle$. In particular, if $j \geq 1$ and if $1 \leq i \leq r$, we have
$h_j({\mathbf {x}}^{(i)}), e_j({\mathbf {x}}^{(i)}) \in \Lambda^r({\mathbf {x}})$, so that
$h_j({\mathbf {x}}^{(i)})^{\perp}$ and $e_j({\mathbf {x}}^{(i)})^{\perp}$ make sense as linear operators on $\Lambda^r({\mathbf {x}})$.
These operators (and their `dual' versions
$h_j({\mathbf {x}}^{(i^*)})^{\perp}$ and $e_j({\mathbf {x}}^{(i^*)})^{\perp}$)
will play a key role in this paper.
\subsection{Representations of $G_n$}
In his thesis, Specht \cite{Specht} described the irreducible representations of $G_n$.
We recall his construction.
Given a matrix $g \in G_n$,
define numbers $\chi(g)$ and ${\mathrm {sign}}(g)$ by
\begin{align}
\chi(g) &:= \text{product of the nonzero entries in $g$}, \\
{\mathrm {sign}}(g) &:= \text{determinant of the permutation matrix underlying $g$}.
\end{align}
In particular, the number $\chi(g)$ is an $r^{th}$ root of unity and ${\mathrm {sign}}(g) = \pm 1$. Both of the functions
$\chi$ and ${\mathrm {sign}}$ are linear characters of $G_n$. In other words, we have
$\chi(gh) = \chi(g) \chi(h)$ and ${\mathrm {sign}}(g h) = {\mathrm {sign}}(g) {\mathrm {sign}}(h)$ for all $g, h \in G_n$.
It is well known that the irreducible
complex representations of the
symmetric group ${\mathfrak{S}}_n$ are indexed by partitions $\lambda \vdash n$. Given $\lambda \vdash n$,
let $S^{\lambda}$ be the corresponding irreducible ${\mathfrak{S}}_n$-module.
For example, we have that $S^{(n)}$ is the trivial representation of ${\mathfrak{S}}_n$ and $S^{(1^n)}$ is the sign
representation of ${\mathfrak{S}}_n$.
Let $V$ be a module over the cyclic group $G = {\mathbb {Z}}_r$ and let $U$ be an ${\mathfrak{S}}_n$-module. We build a
$G_n$-module $V \wr U$ by letting $V \wr U = V^{\otimes n} \otimes U$ as a vector space and defining
the action of $G_n$ by
\begin{equation}
\mathrm{diag}(g_1, \dots, g_n).(v_1 \otimes \cdots \otimes v_n \otimes u) :=
(g_1.v_1) \otimes \cdots \otimes (g_n.v_n) \otimes u,
\end{equation}
for all diagonal matrices $\mathrm{diag}(g_1, \dots, g_n) \in G_n$, and
\begin{equation}
\pi.(v_1 \otimes \cdots \otimes v_n \otimes u) := v_{\pi^{-1}(1)} \otimes \cdots \otimes v_{\pi^{-1}(n)} \otimes (\pi.u),
\end{equation}
for all $\pi \in {\mathfrak{S}}_n \subseteq G_n$. If $V$ is an irreducible $G$-module and $U$
is an irreducible ${\mathfrak{S}}_n$-module, then $V \wr U$ is an irreducible $G_n$-module, but not all of the
irreducible $G_n$-modules arise in this way.
For any composition $\alpha = (\alpha_1, \dots , \alpha_r) \models n$ with $r$ parts,
the parabolic subgroup of
block diagonal matrices in $G_n$ with block sizes $\alpha_1, \dots, \alpha_r$
gives an inclusion
\begin{equation}
G_{\alpha} :=
G_{\alpha_1} \times \cdots \times G_{\alpha_r} \subseteq G_n.
\end{equation}
If $W_i$ is a $G_{\alpha_i}$-module for $1 \leq i \leq r$, the tensor product
$W_1 \otimes \cdots \otimes W_r$ is a $G_{\alpha}$-module and
the induction ${\mathrm {Ind}}_{G_{\alpha}}^{G_n}(W_1 \otimes \cdots \otimes W_r)$
is a $G_n$-module.
We index the irreducible representations of the cyclic group
$G = {\mathbb {Z}}_r = \langle \zeta \rangle$ in the following slightly nonstandard way.
For $1 \leq i \leq r$,
let $\rho_i: G \rightarrow GL_1({\mathbb {C}}) = {\mathbb {C}}^{\times}$ be the homomorphism
\begin{equation}
\rho_i: \zeta \mapsto \zeta^{-i},
\end{equation}
and let $V_i$ be the vector space ${\mathbb {C}}$ with $G$-module structure given by $\rho_i$.
In particular, we have that $V_r$ is the trivial representation of $G$ and
$V_1, V_2, \dots, V_{r-1}$ are the nontrivial irreducible representations of $G$.
The irreducible modules for $G_n$ are indexed by $r$-partitions of $n$.
If $\bm{\lambda} = (\lambda^{(1)}, \dots, \lambda^{(r)}) \vdash_r n$ is an $r$-partition of $n$, let
$\alpha = (\alpha_1, \dots, \alpha_r) \models n$ be the composition whose parts are $\alpha_i := |\lambda^{(i)}|$.
Define
$\bm{S^{\lambda}}$ to be the
$G_n$-module given by
\begin{equation}
\bm{S^{\lambda}} := {\mathrm {Ind}}_{G_{\alpha}}^{G_n}
((V_1 \wr S^{\lambda^{(1)}}) \otimes \cdots \otimes (V_r \wr S^{\lambda^{(r)}})).
\end{equation}
Specht proved that the set $\{ \bm{S^{\lambda}} \,:\, \bm{\lambda} \vdash_r n \}$ forms a complete set of
nonisomorphic irreducible representations of $G_n$.
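As a consistency check on this classification, the squared dimensions of the modules $\bm{S^{\lambda}}$ must sum to $|G_n| = r^n n!$. In the sketch below (all helper names are ours), $\dim \bm{S^{\lambda}}$ is computed from the induction product as a multinomial coefficient times a product of standard-tableau counts, the latter via the hook length formula:

```python
from math import factorial

def partitions(n, max_part=None):
    # generate partitions of n as weakly decreasing tuples
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def num_syt(lam):
    # number of standard Young tableaux of shape lam (hook length formula)
    if not lam:
        return 1
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])]
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(sum(lam)) // hooks

def r_partitions(n, r):
    # generate all r-tuples of partitions with total size n
    if r == 0:
        if n == 0:
            yield ()
        return
    for a in range(n + 1):
        for lam in partitions(a):
            for rest in r_partitions(n - a, r - 1):
                yield (lam,) + rest

def dim_irrep(blam):
    # dim S^blam = multinomial(n; |lam^(1)|, ..., |lam^(r)|) * prod_i f^{lam^(i)}
    n = sum(sum(lam) for lam in blam)
    d = factorial(n)
    for lam in blam:
        d = d // factorial(sum(lam)) * num_syt(lam)
    return d
```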
\begin{example}
For any $1 \leq i \leq r$, both of the functions
\begin{equation}
\begin{cases}
\chi^i: g \mapsto (\chi(g))^i \\
{\mathrm {sign}} \cdot \chi^i: g \mapsto {\mathrm {sign}}(g) \cdot (\chi(g))^i
\end{cases}
\end{equation}
on $G_n$ are linear characters.
We leave it for the reader to check that under the above classification we have
\begin{center}
$\begin{array}{cccc}
\chi^1 \leftrightarrow ((n), \varnothing, \dots, \varnothing), & &
{\mathrm {sign}} \cdot \chi^1 \leftrightarrow ((1^n), \varnothing, \dots, \varnothing), \\
\chi^2 \leftrightarrow (\varnothing, (n), \dots, \varnothing), & &
{\mathrm {sign}} \cdot \chi^2 \leftrightarrow (\varnothing, (1^n), \dots, \varnothing), \\
\vdots & & \vdots \\
\chi^r \leftrightarrow (\varnothing, \varnothing, \dots, (n)), & &
{\mathrm {sign}} \cdot \chi^r \leftrightarrow (\varnothing, \varnothing, \dots, (1^n)).
\end{array}$
\end{center}
Since $\chi^r$ is the trivial character of $G_n$, the trivial representation
therefore corresponds to the $r$-partition $(\varnothing, \dots, \varnothing, (n))$.
\end{example}
Let $V$ be a finite-dimensional $G_n$-module. There exist unique integers $m_{\bm{\lambda}}$
such that
\begin{equation*}
V \cong \bigoplus_{\bm{\lambda} \vdash_r n} (\bm{S^{\lambda}})^{m_{\bm{\lambda}}}.
\end{equation*}
The {\em Frobenius character} ${\mathrm {Frob}}(V) \in \Lambda^r({\mathbf {x}})$ of $V$ is given by
\begin{equation}
{\mathrm {Frob}}(V) := \sum_{\bm{\lambda} \vdash_r n} m_{\bm{\lambda}} \bm{s_{\lambda}}({\mathbf {x}}).
\end{equation}
In particular, the multiplicity $m_{\bm{\lambda}}$ of $\bm{S^{\lambda}}$ in $V$ is
$\langle {\mathrm {Frob}}(V), \bm{s_{\lambda^*}}({\mathbf {x}}) \rangle$.
More generally, if $V = \oplus_{d \geq 0} V_d$ is a graded $G_n$-module with
each $V_d$ finite-dimensional, the
{\em graded Frobenius character} ${\mathrm {grFrob}}(V;q) \in \Lambda^r({\mathbf {x}})[[q]]$ of $V$ is
\begin{equation}
{\mathrm {grFrob}}(V;q) := \sum_{d \geq 0} {\mathrm {Frob}}(V_d) \cdot q^d.
\end{equation}
Also recall that
the {\em Hilbert series} ${\mathrm {Hilb}}(V;q)$ of $V$ is
\begin{equation}
{\mathrm {Hilb}}(V;q) := \sum_{d \geq 0} \dim(V_d) \cdot q^d.
\end{equation}
The Frobenius character is compatible with induction product in the following way.
Let $V$ be a $G_n$-module and let $W$ be a $G_m$-module.
The tensor product $V \otimes W$ is a $G_{(n,m)}$-module, so that
${\mathrm {Ind}}_{G_{(n,m)}}^{G_{n+m}} (V \otimes W)$ is a
$G_{n+m}$-module.
We have
\begin{equation}
{\mathrm {Frob}}({\mathrm {Ind}}_{G_{(n,m)}}^{G_{n+m}} (V \otimes W)) =
{\mathrm {Frob}}(V) \cdot {\mathrm {Frob}}(W),
\end{equation}
where the multiplication on the right-hand side takes place within $\Lambda^r({\mathbf {x}})$.
\subsection{Gr\"obner theory}
A total order $<$ on the monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ is called a {\em monomial order} if
\begin{itemize}
\item $1 \leq m$ for every monomial $m \in {\mathbb {C}}[{\mathbf {x}}_n]$, and
\item $m \leq m'$ implies $m \cdot m'' \leq m' \cdot m''$ for all monomials $m, m', m'' \in {\mathbb {C}}[{\mathbf {x}}_n]$.
\end{itemize}
In this paper we will only use the {\em lexicographic} monomial order defined by
$x_1^{a_1} \cdots x_n^{a_n} < x_1^{b_1} \cdots x_n^{b_n}$ if there exists $1 \leq i \leq n$ such that
$a_1 = b_1, \dots, a_{i-1} = b_{i-1}$, and $a_i < b_i$.
If $f \in {\mathbb {C}}[{\mathbf {x}}_n]$ is a nonzero polynomial and $<$ is a monomial order, let ${\mathrm {in}}_<(f)$ be the leading term of
$f$ with respect to the order $<$. If $I \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ is an ideal, the corresponding {\em initial ideal}
${\mathrm {in}}_<(I) \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ is the monomial ideal in ${\mathbb {C}}[{\mathbf {x}}_n]$ generated by the leading terms
of every nonzero polynomial in $I$:
\begin{equation}
{\mathrm {in}}_<(I) := \langle {\mathrm {in}}_<(f) \,:\, f \in I - \{0\} \rangle.
\end{equation}
The collection of monomials $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ which are not contained in ${\mathrm {in}}_<(I)$, namely
\begin{equation}
\{ \text{monomials $m \in {\mathbb {C}}[{\mathbf {x}}_n]$} \,:\, {\mathrm {in}}_<(f) \nmid m \text{ for all $f \in I - \{0\}$} \}
\end{equation}
descends to a vector space basis for the quotient ${\mathbb {C}}[{\mathbf {x}}_n]/I$. This is called the
{\em standard monomial basis}.
A finite subset $B = \{g_1, \dots, g_m\} \subseteq I$ of nonzero polynomials in $I$ is called a {\em Gr\"obner basis}
of $I$ if ${\mathrm {in}}_<(I) = \langle {\mathrm {in}}_<(g_1), \dots, {\mathrm {in}}_<(g_m) \rangle$. A Gr\"obner basis $B$
is called {\em reduced} if
\begin{itemize}
\item the leading coefficient of $g_i$ is $1$ for all $1 \leq i \leq m$, and
\item for $i \neq j$, the monomial ${\mathrm {in}}_<(g_i)$ does not divide any of the terms appearing in $g_j$.
\end{itemize}
After fixing a monomial order, every ideal $I \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ has a unique reduced Gr\"obner basis.
\section{Polynomial identities}
\label{Polynomial}
In this section we prove a family of polynomial and symmetric function identities which
will be useful in our analysis of the rings $R_{n,k}$ and $S_{n,k}$.
The first of these identities is the $G_n$-analog of \cite[Lem. 3.1]{HRS}.
\begin{lemma}
\label{alternating-sum-lemma}
Let $k \leq n$,
let $\alpha_1, \dots, \alpha_k \in {\mathbb {C}}$ be distinct complex numbers, and let $\beta_1, \dots, \beta_n \in {\mathbb {C}}$
be complex numbers with the property that $\{\alpha_1, \dots, \alpha_k\} \subseteq \{\beta_1^r, \dots, \beta_n^r \}$.
For any $n-k+1 \leq s \leq n$ we have
\begin{equation}
\sum_{j = 0}^{s} (-1)^{j} e_{s-j}(\beta_1^r, \dots, \beta_n^r) h_j(\alpha_1, \dots, \alpha_k) = 0.
\end{equation}
\end{lemma}
\begin{proof}
The left-hand side is the coefficient of $t^s$ in the power series
\begin{equation}
\frac{\prod_{i = 1}^n (1 + t \beta_i^r)}{\prod_{i = 1}^k (1 + t \alpha_i)}.
\end{equation}
By assumption, every term in the denominator cancels with a distinct term in the numerator, so that this expression
is a polynomial in $t$ of degree $n-k$. Since $s > n-k$, the coefficient of $t^s$ in this polynomial is $0$.
\end{proof}
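Since this is a polynomial identity in the $\alpha_i$ and $\beta_i$, it can be verified in exact integer arithmetic. The Python sketch below (helper names ours) evaluates both symmetric functions from their monomial expansions and checks that the sum vanishes exactly when $s > n - k$:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e_sym(k, vals):
    # elementary symmetric polynomial e_k evaluated at vals (e_0 = 1)
    return sum(prod(c) for c in combinations(vals, k))

def h_sym(k, vals):
    # complete homogeneous symmetric polynomial h_k evaluated at vals
    return sum(prod(c) for c in combinations_with_replacement(vals, k))

def alternating_sum(s, beta_r, alpha):
    # the left-hand side of the lemma for a given s
    return sum((-1) ** j * e_sym(s - j, beta_r) * h_sym(j, alpha)
               for j in range(s + 1))
```

Note that the $s = n - k$ coefficient is the top coefficient of the degree $n-k$ polynomial in the proof, so it does not vanish.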
In practice, our applications of Lemma~\ref{alternating-sum-lemma} will always involve one of the two
situations $\{\beta_1^r, \dots, \beta_n^r\} = \{\alpha_1, \dots, \alpha_k\}$ or
$\{\beta_1^r, \dots, \beta_n^r\} = \{\alpha_1, \dots, \alpha_k, 0 \}$.
Let $\gamma = (\gamma_1, \dots, \gamma_n) \models n$ be a composition with $n$ parts.
The {\em Demazure character} $\kappa_{\gamma}({\mathbf {x}}_n) \in {\mathbb {C}}[{\mathbf {x}}_n]$ is defined recursively
as follows. If $\gamma_1 \geq \cdots \geq \gamma_n$, we let
$\kappa_{\gamma}({\mathbf {x}}_n)$ be the monomial
\begin{equation}
\kappa_{\gamma}({\mathbf {x}}_n) = x_1^{\gamma_1} \cdots x_n^{\gamma_n}.
\end{equation}
In general, if $\gamma_i < \gamma_{i+1}$, we let
\begin{equation}
\kappa_{\gamma}({\mathbf {x}}_n) = \frac{ x_i (\kappa_{\gamma'}({\mathbf {x}}_n)) - x_{i+1} (s_i \cdot \kappa_{\gamma'}({\mathbf {x}}_n))}{x_i - x_{i+1}},
\end{equation}
where $\gamma' = (\gamma_1, \dots, \gamma_{i+1}, \gamma_i, \dots, \gamma_n)$ is the
composition obtained by interchanging the $i^{th}$ and $(i+1)^{st}$ parts of $\gamma$
and $s_i \cdot \kappa_{\gamma'}({\mathbf {x}}_n)$ is the polynomial $\kappa_{\gamma'}({\mathbf {x}}_n)$ with $x_i$
and $x_{i+1}$ interchanged.
It can be shown that this recursion gives a well defined collection of polynomials
$\{ \kappa_{\gamma}({\mathbf {x}}_n) \}$ indexed by compositions $\gamma$ with $n$ parts.
This set forms a basis for the polynomial ring ${\mathbb {C}}[{\mathbf {x}}_n]$.
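The recursion can be implemented verbatim with exact polynomial arithmetic, since the numerator is always divisible by $x_i - x_{i+1}$. In the sketch below (our own dictionary encoding, mapping exponent vectors to integer coefficients), we recover the key polynomials $\kappa_{(0,1)} = x_1 + x_2$ and $\kappa_{(0,2)} = x_1^2 + x_1 x_2 + x_2^2$:

```python
def padd(p, q):
    # add two polynomials stored as {exponent_tuple: coefficient}
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
        if r[m] == 0:
            del r[m]
    return r

def pmul_var(p, i, k=1):
    # multiply p by x_i^k
    return {tuple(e + (k if j == i else 0) for j, e in enumerate(m)): c
            for m, c in p.items()}

def pswap(p, i):
    # interchange x_i and x_{i+1} in every exponent vector
    out = {}
    for m, c in p.items():
        m2 = list(m)
        m2[i], m2[i + 1] = m2[i + 1], m2[i]
        out[tuple(m2)] = out.get(tuple(m2), 0) + c
    return out

def pdiv_linear(p, i):
    # exact division of p by (x_i - x_{i+1}) via Horner in the variable x_i
    by_deg = {}
    for m, c in p.items():
        key = tuple(0 if j == i else e for j, e in enumerate(m))
        by_deg.setdefault(m[i], {})[key] = c
    d = max(by_deg, default=0)
    q, qa = {}, {}
    for a in range(d, 0, -1):
        qa = padd(by_deg.get(a, {}), pmul_var(qa, i + 1))  # q_{a-1} = c_a + x_{i+1} q_a
        q = padd(q, pmul_var(qa, i, a - 1))
    assert padd(by_deg.get(0, {}), pmul_var(qa, i + 1)) == {}, "not divisible"
    return q

def demazure(gamma):
    # the recursion from the text: swap an ascent, apply the divided difference
    gamma = list(gamma)
    for i in range(len(gamma) - 1):
        if gamma[i] < gamma[i + 1]:
            gp = gamma[:]
            gp[i], gp[i + 1] = gp[i + 1], gp[i]
            f = demazure(gp)
            swapped = pmul_var(pswap(f, i), i + 1)
            num = padd(pmul_var(f, i), {m: -c for m, c in swapped.items()})
            return pdiv_linear(num, i)
    return {tuple(gamma): 1}  # weakly decreasing case: a single monomial
```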
Demazure characters played a key role in \cite{HRS};
they will be equally important here.
In order to state the $G_n$-analogs of the lemmata from \cite{HRS} that we will need, we must
introduce some notation.
\begin{defn}
Let $S = \{s_1 < s_2 < \cdots < s_m\} \subseteq [n]$. The {\em skip monomial} ${\mathbf {x}}(S)$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ is
\begin{equation*}
{\mathbf {x}}(S) := x_{s_1}^{s_1} x_{s_2}^{s_2 - 1} \cdots x_{s_m}^{s_m - m + 1}.
\end{equation*}
The {\em skip composition} $\gamma(S) = (\gamma_1, \dots, \gamma_n)$ is the length $n$ composition defined by
\begin{equation*}
\gamma_i = \begin{cases}
0 & i \notin S \\
s_j - j + 1 & i = s_j \in S.
\end{cases}
\end{equation*}
We also let $\overline{\gamma(S)} := (\gamma_n, \dots, \gamma_1)$ be the reverse of the skip composition $\gamma(S)$.
\end{defn}
For example, if $n = 8$ and $S = \{2,3,5,8\}$, then $\gamma(S) = (0,2,2,0,3,0,0,5)$ and
${\mathbf {x}}(S) = x_2^2 x_3^2 x_5^3 x_8^5$.
In general, we have that $\gamma(S)$ is the exponent vector of ${\mathbf {x}}(S)$.
We will be interested in the $r^{th}$ powers ${\mathbf {x}}(S)^r$ of skip monomials in this paper.
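The skip composition is simple enough to compute mechanically; the following helper (our own code) reproduces the example above.

```python
def skip_composition(S, n):
    """Skip composition gamma(S) for S a subset of [n] = {1, ..., n}.
    This tuple is also the exponent vector of the skip monomial x(S)."""
    gamma = [0] * n
    for j, s in enumerate(sorted(S), start=1):  # s = s_j in increasing order
        gamma[s - 1] = s - j + 1                # exponent of x_{s_j}
    return tuple(gamma)
```

Here \texttt{skip\_composition(\{2, 3, 5, 8\}, 8)} returns $(0,2,2,0,3,0,0,5)$, and the exponent vector of ${\mathbf {x}}(S)^r$ is obtained by multiplying every entry by $r$.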
Skip monomials are related to Demazure characters as follows.
For any polynomial $f({\mathbf {x}}_n) = f(x_1, \dots, x_n) \in {\mathbb {C}}[{\mathbf {x}}_n]$, let
$f({\mathbf {x}}_n^r) = f(x_1^r, \dots, x_n^r)$ and $\overline{f({\mathbf {x}}_n^r)} = f(x_n^r, \dots, x_1^r)$.
The following result is immediate from
\cite[Lem. 3.5]{HRS} after the change of variables
$(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$.
\begin{lemma}
\label{demazure-initial-term}
Let $n \geq k$ and let $S \subseteq [n]$ satisfy $|S| = n-k+1$. Let $<$ be lexicographic order. We have
\begin{equation}
{\mathrm {in}}_<(\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}) = {\mathbf {x}}(S)^r.
\end{equation}
Moreover, for any $1 \leq i \leq n$ we have
\begin{equation}
x_i^{r \cdot (\max(S)-n+k+1)} \nmid m
\end{equation}
for any monomial $m$ appearing in $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}$. Finally, if $T \subseteq [n]$
satisfies $|T| = n-k+1$ and $T \neq S$, then ${\mathbf {x}}(S)^r \nmid m$ for any monomial $m$ appearing in
$\overline{\kappa_{\overline{\gamma(T)}}({\mathbf {x}}_n^r)}$.
\end{lemma}
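For small cases the leading-term claim can be checked by brute force. The sketch below (our own code, assuming \texttt{sympy}; shown for $r = 1$, $n = 3$, $k = 2$, $S = \{1,3\}$) computes the relevant Demazure character by its defining recursion, reverses the variables, and extracts the lexicographic leading monomial.

```python
import sympy as sp

x = sp.symbols('x1:4')  # x1 > x2 > x3 in lexicographic order

def demazure(gamma):
    # divided-difference recursion for the Demazure character
    if all(a >= b for a, b in zip(gamma, gamma[1:])):
        return sp.Mul(*[xi**g for xi, g in zip(x, gamma)])
    i = next(i for i in range(len(gamma) - 1) if gamma[i] < gamma[i + 1])
    gp = list(gamma)
    gp[i], gp[i + 1] = gp[i + 1], gp[i]
    f = demazure(gp)
    sf = f.subs({x[i]: x[i + 1], x[i + 1]: x[i]}, simultaneous=True)
    return sp.expand(sp.cancel((x[i] * f - x[i + 1] * sf) / (x[i] - x[i + 1])))

def lex_leading(f):
    # exponent tuples of a sympy Poly compare lexicographically in Python
    m = max(sp.Poly(f, *x).monoms())
    return sp.Mul(*[xi**e for xi, e in zip(x, m)])

# S = {1,3}: gamma(S) = (1,0,2), so reversed gamma(S) = (2,0,1)
f = demazure((2, 0, 1))
# reversing the variables implements the "bar" operation on the character
fbar = f.subs(dict(zip(x, reversed(x))), simultaneous=True)
```

Here \texttt{lex\_leading(fbar)} returns $x_1 x_3^2 = {\mathbf {x}}(\{1,3\})$, as the lemma predicts.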
We also record the fact, which follows immediately from \cite{HRS}, that the polynomials
$\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}$ appearing in Lemma~\ref{demazure-initial-term}
are contained in the ideals $I_{n,k}$ and $J_{n,k}$.
The following result follows from \cite[Eqn. 3.4]{HRS} after the change of variables
$(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$.
\begin{lemma}
\label{demazures-in-ideal}
Let $n \geq k$ and let $S \subseteq [n]$ satisfy $|S| = n-k+1$. The polynomial
$\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}$ is contained in the ideal
\begin{equation}
\langle e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r) \rangle \subseteq {\mathbb {C}}[{\mathbf {x}}_n].
\end{equation}
In particular, we have $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)} \in I_{n,k}$ and
$\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})} \in J_{n,k}$.
\end{lemma}
We define two formal power series in the infinite variable set
${\mathbf {x}} = ({\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)})$ using the ${\mathrm {maj}}$ and ${\mathrm {coinv}}$ statistics on
$r$-colored ordered multiset partitions.
If $\mu$ is an $r$-colored ordered multiset partition, let ${\mathbf {x}}^{\mu}$ be the monomial
in the variable set ${\mathbf {x}}$ where the exponent of $x_j^{(i)}$ is the number of occurrences
of $j^{i-1}$ in $\mu$.
\begin{defn}
\label{m-and-i}
Let $r \geq 1$ and let $k \leq n$ be positive integers. Define two formal power series in the variable set
${\mathbf {x}} = ({\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)})$ by
\begin{align}
\bm{M_{n,k}}({\mathbf {x}};q) &:= \sum_{\mu} q^{{\mathrm {maj}}(\mu)} {\mathbf {x}}^{\mu}, \\
\bm{I_{n,k}}({\mathbf {x}};q) &:= \sum_{\mu} q^{{\mathrm {coinv}}(\mu)} {\mathbf {x}}^{\mu},
\end{align}
where the sum is over all $r$-colored ordered multiset partitions $\mu$ of size $n$ with $k$ blocks.
\end{defn}
The next result establishes that
the formal power series $\bm{M_{n,k}}({\mathbf {x}};q), \bm{I_{n,k}}({\mathbf {x}};q)$ in Definition~\ref{m-and-i}
are both contained in the ring $\Lambda^r({\mathbf {x}})$ and are related to each other by $q$-reversal.
\begin{lemma}
\label{m-equals-i}
Both of the formal power series $\bm{M_{n,k}}({\mathbf {x}};q)$ and $\bm{I_{n,k}}({\mathbf {x}};q)$
lie in the ring $\Lambda^r({\mathbf {x}})$. Moreover,
we have
$\bm{M_{n,k}}({\mathbf {x}};q) = {\mathrm {rev}}_q (\bm{I_{n,k}}({\mathbf {x}};q))$.
\end{lemma}
\begin{proof}
The truth of this statement for $r = 1$ (when $\Lambda^r({\mathbf {x}})$ is the usual
ring of symmetric functions) follows from the work of Wilson \cite{WMultiset}. To deduce this statement for
general $r \geq 1$, consider a new countably infinite set of variables
\begin{equation}
{\mathbf {z}} = \{z_{i,j} \,:\, j \in {\mathbb {Z}}_{> 0}, 1 \leq i \leq r \}.
\end{equation}
The association $z_{i,j} \leftrightarrow x_j^{(i)}$ gives a bijection with our collection of variables
${\mathbf {x}} = ({\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)})$. The idea is to reinterpret $\bm{M_{n,k}}({\mathbf {x}};q)$ and $\bm{I_{n,k}}({\mathbf {x}};q)$ in terms of the
new variable set ${\mathbf {z}}$, and then apply the equality and symmetry known in the case $r = 1$.
To achieve the program of the preceding paragraph, we introduce the following notation.
Let $\bm{M_{n,k}^1}({\mathbf {z}};q^r)$ be the formal power series
\begin{equation}
\bm{M_{n,k}^1}({\mathbf {z}};q^r) := \sum_{\mu} q^{r \cdot {\mathrm {maj}}(\mu)} {\mathbf {z}}^{\mu},
\end{equation}
where the sum is over all ordered multiset partitions $\mu$ of size $n$ with $k$ blocks
on the countably infinite alphabet
\begin{equation*}
1^{r-1} < 2^{r-1} < \cdots < 1^{r-2} < 2^{r-2} < \cdots < 1^0 < 2^0 < \cdots
\end{equation*}
and we compute ${\mathrm {maj}}(\mu)$ as in the $r = 1$ case (i.e., ignoring contributions to ${\mathrm {maj}}$ coming from colors,
and not multiplying descents by $r$).
Similarly, let $\bm{I^1_{n,k}}({\mathbf {z}};q^r)$ be the formal power series
\begin{equation}
\bm{I^1_{n,k}}({\mathbf {z}};q^r) := \sum_{\mu} q^{r \cdot {\mathrm {coinv}}(\mu)} {\mathbf {z}}^{\mu},
\end{equation}
where the sum is over all ordered multiset partitions $\mu$ of size $n$ with $k$ blocks
on the countably infinite alphabet
\begin{equation*}
1^{r-1} \prec \cdots \prec 1^0 \prec 2^{r-1} \prec \cdots \prec 2^0 \prec \cdots
\end{equation*}
and we define ${\mathrm {coinv}}(\mu)$ as in the $r = 1$ case (i.e., ignoring the contribution to
${\mathrm {coinv}}$ coming from colors, and not multiplying the number of coinversion pairs by $r$).
It follows from the definition of $\bm{M_{n,k}}({\mathbf {x}};q)$ that
\begin{equation}
\label{maj-relation}
\bm{M_{n,k}}({\mathbf {x}};q) =
\bm{M_{n,k}^1}({\mathbf {z}};q^r) |_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}}.
\end{equation}
This expression for $\bm{M_{n,k}}({\mathbf {x}};q)$, together with the fact
that $\bm{M_{n,k}^1}({\mathbf {z}};q^r)$ is symmetric in the ${\mathbf {z}}$ variables, proves that
$\bm{M_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$.
Similarly, we have
\begin{equation}
\label{inv-relation}
\bm{I_{n,k}}({\mathbf {x}};q) =
\bm{I_{n,k}^1}({\mathbf {z}};q^r) |_{z_{i,j} = q^{r-i} \cdot x_j^{(i)}},
\end{equation}
so that $\bm{I_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$.
Applying the lemma in the case $r = 1$, we have
\begin{align}
\bm{M_{n,k}}({\mathbf {x}};q)
&= \bm{M_{n,k}^1}(z_{1,r}, z_{2,r}, \dots, z_{1,r-1}, z_{2,r-1}, \dots , z_{1,1}, z_{2,1}, \dots ;q^r)
|_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}}
\\
&= \bm{M_{n,k}^1}(z_{1,r}, \dots, z_{1,1}, z_{2,r}, \dots, z_{2,1}, \dots; q^r) |_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}} \\
&= {\mathrm {rev}}_q \left[ \bm{I_{n,k}^1}(z_{1,r}, \dots, z_{1,1}, z_{2,r}, \dots, z_{2,1}, \dots; q^r)
\right]|_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}} \\
&= {\mathrm {rev}}_q \left[ \bm{I_{n,k}^1}(z_{1,r}, \dots, z_{1,1}, z_{2,r}, \dots, z_{2,1}, \dots; q^r)|_{z_{i,j} = q^{r-i} \cdot x_j^{(i)}}
\right] \\
&= {\mathrm {rev}}_q(\bm{I_{n,k}}({\mathbf {x}};q)).
\end{align}
The first equality is Equation~\ref{maj-relation}, the second equality
uses the fact that $\bm{M_{n,k}^1}({\mathbf {z}};q^r)$ is symmetric in the ${\mathbf {z}}$ variables, the third equality uses the fact that
$\bm{M_{n,k}^1}({\mathbf {z}};q^r) = {\mathrm {rev}}_q(\bm{I_{n,k}^1}({\mathbf {z}};q^r))$,
the fourth equality interchanges evaluation and $q$-reversal, and the final equality
is Equation~\ref{inv-relation}.
\end{proof}
The power series in Lemma~\ref{m-equals-i} will be (up to minor transformations) the graded Frobenius character
of the ring $S_{n,k}$.
We give this character-to-be a name.
\begin{defn}
Let $r \geq 1$ and let $k \leq n$ be positive integers. Let $\bm{D_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$ be the common
ring element
\begin{equation}
\bm{D_{n,k}}({\mathbf {x}};q) := ({\mathrm {rev}}_q \circ \omega) \bm{M_{n,k}}({\mathbf {x}};q) = \omega \bm{I_{n,k}}({\mathbf {x}};q).
\end{equation}
\end{defn}
As a Frobenius character, the ring element $\bm{D_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$ must expand positively in the Schur
basis $\{ \bm{s_{\lambda}}({\mathbf {x}}) \,:\, { \bm{\lambda} } \vdash_r n \}$. The ${\mathrm {maj}}$ formulation of $\bm{D_{n,k}}({\mathbf {x}};q)$
is well suited to proving this fact directly, as well as giving the Schur expansion of $\bm{D_{n,k}}({\mathbf {x}};q)$.
The following proposition is a colored version of a result of Wilson \cite[Thm. 5.0.1]{WMultiset}.
\begin{proposition}
\label{d-schur-expansion}
Let $r \geq 1$ and let $k \leq n$ be positive integers. We have the Schur expansion
\begin{equation}
\bm{D_{n,k}}({\mathbf {x}};q) = {\mathrm {rev}}_q \left[\sum_{{ \bm{T}} \in {\mathrm {SYT}}^r(n)} q^{{\mathrm {maj}}({ \bm{T}}) + r {n-k \choose 2} - r (n-k) {\mathrm {des}}({ \bm{T}})}
{{\mathrm {des}}({ \bm{T}}) \brack n-k}_{q^r} \bm{s_{{\mathrm {shape}}({ \bm{T}})'}}({\mathbf {x}}) \right].
\end{equation}
\end{proposition}
\begin{proof}
Consider the collection ${\mathcal{W}}_n$ of all length $n$ words $w = w_1 \dots w_n$ in the alphabet of
$r$-colored positive integers.
For any word $w \in {\mathcal{W}}_n$, the (colored version of the) {\em RSK correspondence} gives a pair of
$r$-tableaux $(\bm{U}, { \bm{T}})$ of the same shape, with $\bm{U}$ semistandard and ${ \bm{T}}$ standard.
For example, if $r = 3$ and $w = 2^0 1^1 4^1 2^2 1^0 2^0 2^1 1^2 \in {\mathcal{W}}_8$ then
$w \mapsto (\bm{U}, { \bm{T}})$ where
\begin{small}
\begin{equation*}
\bm{U} = \,
\begin{Young}
1 & 2 \\
2
\end{Young}, \hspace{0.1in}
\begin{Young}
1 & 2 \\
4
\end{Young}, \hspace{0.1in}
\begin{Young}
1 \\ 2
\end{Young} \hspace{0.3in} { \bm{T}} = \,
\begin{Young}
1 & 6 \\ 5
\end{Young}, \hspace{0.1in}
\begin{Young}
2 & 3 \\ 7 \end{Young}, \hspace{0.1in}
\begin{Young}
4 \\ 8
\end{Young} \, .
\end{equation*}
\end{small}
The RSK map gives a bijection
\begin{equation}
{\mathcal{W}}_n \xrightarrow{\sim} \left\{ (\bm{U}, { \bm{T}}) \,:\,
\begin{array}{c}
\text{$\bm{U}$ a semistandard $r$-tableau with $n$ boxes,} \\
\text{${ \bm{T}}$ a standard $r$-tableau with $n$ boxes,} \\
\text{${\mathrm {shape}}(\bm{U}) = {\mathrm {shape}}({ \bm{T}})$}
\end{array} \right\}.
\end{equation}
If $w \mapsto (\bm{U}, { \bm{T}})$, then ${\mathrm {Des}}(w) = {\mathrm {Des}}({ \bm{T}})$ so that ${\mathrm {maj}}(w) = {\mathrm {maj}}({ \bm{T}})$.
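The colored RSK map produces an $r$-tuple of tableaux and is somewhat involved; as an illustration of the key property ${\mathrm {Des}}(w) = {\mathrm {Des}}({ \bm{T}})$ used here, the following is a minimal uncolored ($r = 1$) row-insertion RSK (our own sketch, not the colored map itself).

```python
from bisect import bisect_right

def rsk(w):
    """Row-insertion RSK for a word w: returns (P, Q) with P semistandard,
    Q standard; Q records the order in which boxes are created."""
    P, Q = [], []
    for t, a in enumerate(w, start=1):
        row = 0
        while True:
            if row == len(P):                    # start a new row
                P.append([a]); Q.append([t]); break
            pos = bisect_right(P[row], a)        # leftmost entry strictly > a
            if pos == len(P[row]):               # a fits at the end of the row
                P[row].append(a); Q[row].append(t); break
            a, P[row][pos] = P[row][pos], a      # bump the entry to row below
            row += 1
    return P, Q

def descents_word(w):
    return {i for i in range(1, len(w)) if w[i - 1] > w[i]}

def descents_tableau(Q):
    # i is a descent of a standard tableau if i+1 sits in a strictly lower row
    row_of = {t: r for r, row in enumerate(Q) for t in row}
    return {i for i in range(1, len(row_of)) if row_of[i + 1] > row_of[i]}
```

One can check on any word that \texttt{descents\_word(w)} agrees with \texttt{descents\_tableau(Q)}, so that ${\mathrm {maj}}(w) = {\mathrm {maj}}(Q)$ as well.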
For any word $w \in {\mathcal{W}}_n$, we can generate a collection of ${{\mathrm {des}}(w) \choose n-k}$
$r$-colored ordered multiset partitions $\mu$ as follows.
Among the ${\mathrm {des}}(w)$ descents of $w$, choose $n-k$ of them to star, yielding a
pair $(w, S)$ where $S \subseteq {\mathrm {Des}}(w)$ satisfies $|S| = n-k$. We may identify $(w, S)$ with an
$r$-colored ordered multiset partition $\mu$.
The above paragraph
implies that
\begin{equation}
\label{first-m-equation}
\bm{M_{n,k}}({\mathbf {x}};q) = \sum_{w \in {\mathcal{W}}_n} q^{{\mathrm {maj}}(w) + r {n-k \choose 2} - r(n-k){\mathrm {des}}(w)}
{{\mathrm {des}}(w) \brack n-k}_{q^r} {\mathbf {x}}^w,
\end{equation}
where the factor $q^{r {n-k \choose 2} - r(n-k){\mathrm {des}}(w)} {{\mathrm {des}}(w) \brack n-k}_{q^r}$
is generated by the ways in which $n-k$ stars can be placed
among the ${\mathrm {des}}(w)$ descents of $w$.
Applying RSK to the right-hand side of Equation~\ref{first-m-equation}, we deduce that
\begin{equation}
\bm{M_{n,k}}({\mathbf {x}};q) = \sum_{{ \bm{T}} \in {\mathrm {SYT}}^r(n)} q^{{\mathrm {maj}}({ \bm{T}}) + r {n-k \choose 2} - r(n-k){\mathrm {des}}({ \bm{T}})}
{ {\mathrm {des}}({ \bm{T}}) \brack n-k}_{q^{r}} \bm{s_{{\mathrm {shape}}({ \bm{T}})}}({\mathbf {x}}).
\end{equation}
Since $\bm{D_{n,k}}({\mathbf {x}};q) = ({\mathrm {rev}}_q \circ \omega) \bm{M_{n,k}}({\mathbf {x}};q)$, we are done.
\end{proof}
Our basic tool for proving that $\bm{D_{n,k}}({\mathbf {x}};q) = {\mathrm {grFrob}}(S_{n,k};q)$ will be the following lemma,
which is a colored version of \cite[Lem. 3.6]{HRS}.
\begin{lemma}
\label{e-perp-lemma}
Let $\bm{F({\mathbf {x}})}, \bm{G({\mathbf {x}})} \in \Lambda^r({\mathbf {x}})$ have equal constant terms. Then
$\bm{F({\mathbf {x}})} = \bm{G({\mathbf {x}})}$ if and only if
$e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{F({\mathbf {x}})} = e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{G({\mathbf {x}})}$ for all $j \geq 1$ and $1 \leq i \leq r$.
\end{lemma}
\begin{proof}
The forward direction is obvious. For the reverse direction, let ${ \bm{\lambda} }$ be any $r$-partition, let
$j \geq 1$, and let $1 \leq i \leq r$. We have
\begin{align}
\langle \bm{F({\mathbf {x}})}, e_j({\mathbf {x}}^{(i^*)}) \bm{e_{{ \bm{\lambda} }}}({\mathbf {x}}) \rangle &=
\langle e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{F({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}({\mathbf {x}})} \rangle \\
&= \langle e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{G({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}({\mathbf {x}})} \rangle \\
&= \langle \bm{G({\mathbf {x}})}, e_j({\mathbf {x}}^{(i^*)}) \bm{e_{{ \bm{\lambda} }}}({\mathbf {x}}) \rangle.
\end{align}
Since $\langle \bm{F({\mathbf {x}})}, \bm{e_{\bm{\varnothing}}}({\mathbf {x}}) \rangle =
\langle \bm{G({\mathbf {x}})}, \bm{e_{\bm{\varnothing}}}({\mathbf {x}}) \rangle$
by assumption (where $\bm{\varnothing} = (\varnothing, \dots, \varnothing)$ is the empty $r$-partition),
this chain of equalities implies that
$\langle \bm{F({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}}({\mathbf {x}}) \rangle =
\langle \bm{G({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}({\mathbf {x}})} \rangle$ for any $r$-partition
${ \bm{\lambda} }$. We conclude that $\bm{F({\mathbf {x}})} = \bm{G({\mathbf {x}})}$.
\end{proof}
We will show that $\bm{D_{n,k}}({\mathbf {x}};q)$ and ${\mathrm {grFrob}}(S_{n,k};q)$ satisfy the conditions of
Lemma~\ref{e-perp-lemma} by showing that their images under $e_j({\mathbf {x}}^{(i^*)})^{\perp}$
satisfy the same recursion.
The ${\mathrm {coinv}}$ formulation of $\bm{D_{n,k}}({\mathbf {x}};q)$ is best suited to calculating
$e_j({\mathbf {x}}^{(i^*)})^{\perp}$. The following lemma is a colored version of \cite[Lem. 3.7]{HRS}.
\begin{lemma}
\label{d-under-e-perp}
Let $r \geq 1$ and let $k \leq n$ be positive integers. Let $1 \leq i \leq r$ and let $j \geq 1$.
We have
\begin{equation}
e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{D_{n,k}}({\mathbf {x}};q) = q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r}
\sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} \bm{D_{n-j,m}}({\mathbf {x}};q).
\end{equation}
\end{lemma}
\begin{proof}
Applying $\omega$ to both sides of the purported identity, it suffices to prove
\begin{equation}
\label{h-equation}
h_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{I_{n,k}}({\mathbf {x}};q) =
q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r}
\sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} \bm{I_{n-j,m}}({\mathbf {x}};q).
\end{equation}
Since the bases $\{ \bm{h_{{ \bm{\lambda} }}}({\mathbf {x}}) \}$ and $\{ \bm{m_{{ \bm{\lambda} }^*}}({\mathbf {x}}) \}$ are dual bases
for $\Lambda^r({\mathbf {x}})$ under the Hall inner product, for any $\bm{F({\mathbf {x}})} \in \Lambda^r({\mathbf {x}})$
and any $r$-composition ${ \bm{\beta} }$, we have
\begin{equation}
\label{coefficient-extraction}
\langle \bm{F({\mathbf {x}})}, \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \rangle = \text{coefficient of ${\mathbf {x}}^{{ \bm{\beta} }}$ in $\bm{F}({\mathbf {x}})$}.
\end{equation}
Equation~\ref{coefficient-extraction} is our tool for proving Equation~\ref{h-equation}.
Let ${ \bm{\beta} } = (\beta^{(1)}, \dots, \beta^{(r)})$ be an $r$-composition and consider the inner product
\begin{equation}
\label{h-inner-product}
\langle h_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{I_{n,k}}({\mathbf {x}};q), \bm{h_{{ \bm{\beta} }^{*}}}({\mathbf {x}}) \rangle =
\langle \bm{I_{n,k}}({\mathbf {x}};q), h_j({\mathbf {x}}^{(i^*)}) \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \rangle.
\end{equation}
We may write $h_j({\mathbf {x}}^{(i^*)}) \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) = \bm{h_{\bm{\widehat{\beta}}^*}}({\mathbf {x}})$, where
\begin{itemize}
\item
$\bm{\widehat{\beta}} = (\beta^{(1)}, \dots, \widehat{\beta}^{(i)}, \dots, \beta^{(r)})$ is an $r$-composition which agrees with
${ \bm{\beta} }$ in every component except for $i$, and
\item
$\widehat{\beta}^{(i)} = (\beta^{(i)}_1, \beta^{(i)}_2, \dots, 0 ,\dots, 0, j)$,
where the composition $\widehat{\beta}^{(i)}$ has $N$ parts for some positive
integer $N$ larger than the number of parts in any of $\beta^{(1)}, \dots, \beta^{(r)}$.
\end{itemize}
By Equation~\ref{coefficient-extraction}, we can interpret
$\langle \bm{I_{n,k}}({\mathbf {x}}), h_j({\mathbf {x}}^{(i^*)}) \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \rangle =
\langle \bm{I_{n,k}}({\mathbf {x}}), \bm{h_{\bm{\widehat{\beta}}^*}}({\mathbf {x}}) \rangle$
combinatorially.
For any $r$-composition $\bm{\alpha} = (\alpha^{(1)}, \dots, \alpha^{(r)})$,
let ${\mathcal{OP}}_{\bm{\alpha},k}$ be the collection of $r$-colored ordered multiset partitions with $k$ blocks which contain
$\alpha^{(i)}_j$ copies of the letter $j^{i-1}$. Equation~\ref{coefficient-extraction} implies
\begin{equation}
\label{combinatorial-coefficient-extraction}
\langle \bm{I_{n,k}}({\mathbf {x}}), \bm{h_{\bm{\widehat{\beta}}^*}}({\mathbf {x}}) \rangle =
\sum_{\mu \in {\mathcal{OP}}_{\bm{\widehat{\beta}},k}} q^{{\mathrm {coinv}}(\mu)}.
\end{equation}
Let us analyze the right-hand side of Equation~\ref{combinatorial-coefficient-extraction}.
A typical element $\mu \in {\mathcal{OP}}_{\bm{\widehat{\beta}},k}$ contains $j$ copies of the {\em big letter} $N^{i-1}$, together
with various other {\em small letters}.
Recall that the statistic ${\mathrm {coinv}}$ is defined using the order $\prec$, which prioritizes letter value over color.
Our choice of $N$ guarantees that every small letter is $\prec N^{i-1}$.
We have a map
\begin{equation}
\varphi: {\mathcal{OP}}_{\bm{\widehat{\beta}},k} \rightarrow \bigcup_{m = \max(1,k-j)}^{\min(k,n-j)} {\mathcal{OP}}_{{ \bm{\beta} },m},
\end{equation}
where $\varphi(\mu)$ is the $r$-colored ordered multiset partition obtained by erasing all $j$
of the big letters $N^{i-1}$
in $\mu$ (together with any singleton blocks $\{N^{i-1}\}$). Let us analyze the effect of $\varphi$ on ${\mathrm {coinv}}$.
Fix $m$ in the range $\max(1,k-j) \leq m \leq \min(k,n-j)$ and let $\mu \in {\mathcal{OP}}_{{ \bm{\beta} },m}$.
Then any $\mu' \in \varphi^{-1}(\mu)$ is obtained by adding $j$ copies of the big letter $N^{i-1}$
to $\mu$, precisely $k-m$ of which must be added in singleton blocks.
We calculate $\sum_{\mu' \in \varphi^{-1}(\mu)} q^{{\mathrm {coinv}}(\mu')}$ in terms of ${\mathrm {coinv}}(\mu)$ as follows.
Following the notation of the proof of \cite[Lem. 3.7]{HRS},
let us call a big letter $N^{i-1}$ {\em minb} if it is $\prec$-minimal in its block and {\em nminb} if it
is not $\prec$-minimal in its block. Similarly, let us call a small letter {\em mins} or {\em nmins} depending
on whether it is minimal in its block. The contributions to $\sum_{\mu' \in \varphi^{-1}(\mu)} q^{{\mathrm {coinv}}(\mu')}$
coming from big letters are as follows.
\begin{itemize}
\item The $j$ big letters $N^{i-1}$ give a complementary color contribution of $j \cdot (r-i)$ to ${\mathrm {coinv}}$.
\item Each of the $minb$ letters forms a coinversion pair with every $nmins$ letter. Since there are $k-m$
$minb$ letters and $n-j-m$ $nmins$ letters, this contributes $r(k-m)(n-j-m)$ to ${\mathrm {coinv}}$.
\item Each of the $minb$ letters forms a coinversion pair with every $nminb$ letter (for a total of
$(k-m)(j-k+m)$ coinversion pairs) as well as with each $minb$ letter to its left (for a total of ${k-m \choose 2}$ coinversion pairs).
This contributes $r \cdot [ (k-m)(j-k+m) + {k-m \choose 2} ]$ to ${\mathrm {coinv}}$.
\item Each $minb$ letter forms a coinversion pair with each $mins$ letter to its left. If we sum over the ${k \choose k-m}$
ways of interleaving the singleton blocks $\{N^{i-1}\}$ within the blocks of $\mu$, this gives rise to a factor of
${k \brack k-m}_{q^r}$.
\item Each $nminb$ letter forms a coinversion pair with each $mins$ letter to its left. If we consider the ${m \choose j-k+m}$
ways to augment the $m$ blocks of $\mu$ with a $nminb$ letter, this gives rise to a factor of
$q^{r {j-k+m \choose 2}} {m \brack j-k+m}_{q^r}$.
\end{itemize}
Applying the identity
\begin{equation}
r \cdot \left[(k-m)(j-k+m) + {k-m \choose 2} + {j-k+m \choose 2} \right] = r \cdot {j \choose 2},
\end{equation}
we see that
\begin{align}
\sum_{\mu' \in \varphi^{-1}(\mu)} q^{{\mathrm {coinv}}(\mu')} &=
q^{j \cdot (r-i) + r \cdot {j \choose 2} + r \cdot (k-m)(n-j-m)} {k \brack k-m}_{q^r} {m \brack j-k+m}_{q^r} q^{{\mathrm {coinv}}(\mu)} \\
&= q^{j \cdot (r-i) + r \cdot {j \choose 2} + r \cdot (k-m)(n-j-m)} {k \brack j}_{q^r} {j \brack k-m}_{q^r} q^{{\mathrm {coinv}}(\mu)}.
\end{align}
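With $a = k - m$ and $b = j - k + m$ (so that $a + b = j$), the identity used above is the elementary fact $ab + {a \choose 2} + {b \choose 2} = {a + b \choose 2}$; a quick numerical sanity check (our own code):

```python
from math import comb

def identity_holds(j, k, m):
    """Check a*b + C(a,2) + C(b,2) == C(j,2) for a = k-m, b = j-k+m."""
    a, b = k - m, j - k + m          # numbers of minb and nminb letters
    return a * b + comb(a, 2) + comb(b, 2) == comb(j, 2)
```

The check passes for all parameters in the ranges occurring in the proof.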
If we sum this expression over all $\mu \in {\mathcal{OP}}_{{ \bm{\beta} },m}$, and then sum over $m$, we get
\begin{equation}
\label{big-expression-h}
q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r} \sum_{m = \max(1,k-j)}^{\min(k,n-j)}
q^{r \cdot (k-m)(n-j-m)} {j \brack k-m}_{q^r} \sum_{\mu \in {\mathcal{OP}}_{{ \bm{\beta} },m}} q^{{\mathrm {coinv}}(\mu)}.
\end{equation}
However, thanks to Equation~\ref{coefficient-extraction} and the definition of the $\bm{I}$-functions,
the expression (\ref{big-expression-h}) is also equal to
\begin{equation}
\left\langle q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r}
\sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} \bm{I_{n-j,m}}({\mathbf {x}};q),
\bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \right\rangle.
\end{equation}
Since both sides of the equation in the statement of the lemma have the same pairing under $\langle \cdot, \cdot \rangle$
with $\bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}})$ for any $r$-composition ${ \bm{\beta} }$, we are done.
\end{proof}
\section{Hilbert series and standard monomial basis}
\label{Hilbert}
\subsection{The point sets $Y_{n,k}^r$ and $Z_{n,k}^r$}
In this section we derive the Hilbert series of $R_{n,k}$ and $S_{n,k}$.
We also prove that, as ungraded $G_n$-modules, we have
$R_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}]$ and $S_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}]$.
To do this, we will use a general method dating back to Garsia and Procesi \cite{GP}
in the context of the Tanisaki ideal.
We recall the method, and then apply it to our situation.
For any finite point set $Y \subset {\mathbb {C}}^n$, let ${\mathbf {I}}(Y) \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the ideal of polynomials which vanish on
$Y$. That is, we have
\begin{equation}
{\mathbf {I}}(Y) := \{ f \in {\mathbb {C}}[{\mathbf {x}}_n] \,:\, f({\mathbf {y}}) = 0 \text{ for all ${\mathbf {y}} \in Y$} \}.
\end{equation}
We can identify the quotient ${\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y)$ with the ${\mathbb {C}}$-vector space of functions
$Y \rightarrow {\mathbb {C}}$. In particular
\begin{equation}
\dim ({\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y)) = |Y|.
\end{equation}
If $W \subseteq GL_n({\mathbb {C}})$ is a finite subgroup and $Y$ is stable under the action of $W$, we have
\begin{equation}
{\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y) \cong_W {\mathbb {C}}[Y]
\end{equation}
as $W$-modules, where we used the fact that the permutation module ${\mathbb {C}}[Y]$ is self-dual.
The ideal ${\mathbf {I}}(Y)$ is almost never homogeneous. To get a homogeneous ideal, we proceed as follows.
If $f \in {\mathbb {C}}[{\mathbf {x}}_n]$ is any nonzero polynomial of degree $d$, write $f = f_d + f_{d-1} + \cdots + f_0$, where $f_i$ is
homogeneous of degree $i$. Define $\tau(f) := f_d$ and define a homogeneous ideal ${\mathbf {T}}(Y) \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ by
\begin{equation}
{\mathbf {T}}(Y) := \langle \tau(f) \,:\, f \in {\mathbf {I}}(Y) - \{0\} \rangle.
\end{equation}
The passage from ${\mathbf {I}}(Y)$ to ${\mathbf {T}}(Y)$ does not affect the $W$-module structure (or vector space dimension)
of the quotient:
\begin{equation}
{\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Y) \cong_W{\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y) \cong_W {\mathbb {C}}[Y].
\end{equation}
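The top degree component $\tau(f)$ is easy to extract in practice; the following sketch (our own code, assuming \texttt{sympy}) does so.

```python
import sympy as sp

def tau(f, xs):
    """Top degree homogeneous component of a nonzero polynomial f in the
    variables xs, as in the definition of the ideal T(Y)."""
    p = sp.Poly(f, *xs)
    d = p.total_degree()
    # keep exactly the monomials whose total degree equals deg(f)
    return sp.Add(*[c * sp.Mul(*[v**e for v, e in zip(xs, mono)])
                    for mono, c in zip(p.monoms(), p.coeffs())
                    if sum(mono) == d])
```

For instance, $\tau(x^2 + xy + y + 1) = x^2 + xy$, and $\tau\big(x(x - \alpha_1)(x - \alpha_2)\big) = x^3$; the latter computation underlies Lemma~\ref{i-contained-in-t} below.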
Our strategy, whose $r = 1$ avatar was accomplished in \cite{HRS}, is as follows.
\begin{enumerate}
\item Find finite point sets $Y_{n,k}, Z_{n,k} \subset {\mathbb {C}}^n$
which are stable under the action of $G_n$
such that there are equivariant bijections $Y_{n,k} \cong {\mathcal{F}}_{n,k}$ and $Z_{n,k} \cong {\mathcal{OP}}_{n,k}$.
\item Prove that $I_{n,k} \subseteq {\mathbf {T}}(Y_{n,k})$ and
$J_{n,k} \subseteq {\mathbf {T}}(Z_{n,k})$ by showing that the generators of the ideals $I_{n,k}, J_{n,k}$ arise as top
degree components of polynomials vanishing on $Y_{n,k}, Z_{n,k}$ (respectively).
\item Use Gr\"obner theory to prove
\begin{equation*}
\dim(R_{n,k}) = \dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/I_{n,k} \right) \leq | {\mathcal{F}}_{n,k} | =
\dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Y_{n,k}) \right)
\end{equation*}
and
\begin{equation*}
\dim(S_{n,k}) = \dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k} \right) \leq | {\mathcal{OP}}_{n,k} | =
\dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Z_{n,k}) \right).
\end{equation*}
Step 2 then implies $I_{n,k} = {\mathbf {T}}(Y_{n,k})$ and $J_{n,k} = {\mathbf {T}}(Z_{n,k})$.
\end{enumerate}
To accomplish Step 1 of this program, we introduce the following point sets.
\begin{defn}
Fix $k$ distinct positive real numbers $0 < \alpha_1 < \cdots < \alpha_k$.
Let $Y_{n,k} \subset {\mathbb {C}}^n$ be the
set of points $(y_1, \dots, y_n)$ such that
\begin{itemize}
\item we have $y_i = 0$ or $y_i \in \{ \zeta^c \alpha_j \,:\, 0 \leq c \leq r-1, 1 \leq j \leq k \}$ for all $i$, and
\item we have $\{\alpha_1, \dots, \alpha_k\} \subseteq \{ |y_1|, \dots, |y_n| \}$.
\end{itemize}
Let $Z_{n,k} \subseteq {\mathbb {C}}^n$ be the set of points in $Y_{n,k}$ whose coordinates do not vanish:
\begin{equation*}
Z_{n,k} := \{ (y_1, \dots, y_n) \in Y_{n,k} \,:\, y_i \neq 0 \text{ for all $i$} \}.
\end{equation*}
\end{defn}
There is a bijection $\varphi: {\mathcal{F}}_{n,k} \rightarrow Y_{n,k}$ given as follows. Let
$\sigma = (Z \mid B_1 \mid \cdots \mid B_k) \in {\mathcal{F}}_{n,k}$ be a $G_n$-face of dimension $k$, whose
zero block $Z$ may be empty. The point $\varphi(\sigma) = (y_1, \dots, y_n)$ has coordinates given by
\begin{equation}
y_i =
\begin{cases}
0 & \text{if $i \in Z$,} \\
\zeta^c \alpha_j & \text{if $i\in B_j$ and $i$ has color $c$.}
\end{cases}
\end{equation}
For example if $r = 3$ then
\begin{equation*}
\varphi:
( 25 \mid 3^0 \mid 1^0 4^2 6^2) \mapsto
(\zeta^0 \alpha_2, 0, \zeta^0 \alpha_1, \zeta^2 \alpha_2, 0, \zeta^2 \alpha_2).
\end{equation*}
The set $Y_{n,k}$ is closed under the action of $G_n$ and the map $\varphi$ commutes
with the action of $G_n$. It follows that $Y_{n,k} \cong {\mathcal{F}}_{n,k}$ as
$G_n$-sets. Moreover, the bijection $\varphi$ restricts to show that
$Z_{n,k} \cong {\mathcal{OP}}_{n,k}$ as $G_n$-sets.
This accomplishes Step 1 of our program.
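The bijection $\varphi$ is straightforward to implement; the following sketch (our own code, with hypothetical function and argument names, taking the $\alpha_j$ as arbitrary fixed positive reals) reproduces the $r = 3$ example above.

```python
import cmath

def phi(zero_block, blocks, alphas, r):
    """Map a G_n-face (Z | B_1 | ... | B_k) to a point of Y_{n,k}.
    blocks[j-1] lists the (letter, color) pairs of B_j; alphas[j-1] = alpha_j."""
    n = len(zero_block) + sum(len(b) for b in blocks)
    zeta = cmath.exp(2j * cmath.pi / r)   # primitive r-th root of unity
    y = [0j] * n                          # letters in the zero block stay at 0
    for j, block in enumerate(blocks, start=1):
        for letter, color in block:
            y[letter - 1] = zeta**color * alphas[j - 1]
    return y
```

With $r = 3$, $\alpha_1 = 1$, $\alpha_2 = 2$, the face $(25 \mid 3^0 \mid 1^0 4^2 6^2)$ maps to $(\alpha_2, 0, \alpha_1, \zeta^2\alpha_2, 0, \zeta^2\alpha_2)$, matching the example.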
Step 2 of our program is accomplished by appropriate modifications of \cite[Sec. 4]{HRS}.
\begin{lemma}
\label{i-contained-in-t}
We have $I_{n,k} \subseteq {\mathbf {T}}(Y_{n,k})$ and $J_{n,k} \subseteq {\mathbf {T}}(Z_{n,k})$.
\end{lemma}
\begin{proof}
We will show that every generator of $I_{n,k}$ (resp. $J_{n,k}$)
is the top degree component of some polynomial in ${\mathbf {I}}(Y_{n,k})$ (resp. ${\mathbf {I}}(Z_{n,k})$).
Let $1 \leq i \leq n$.
It is clear that $x_i (x_i^r - \alpha_1^r) \cdots (x_i^r - \alpha_k^r) \in {\mathbf {I}}(Y_{n,k})$. Taking the top degree
component, we have $x_i^{kr+1} \in {\mathbf {T}}(Y_{n,k})$. Similarly, the polynomial
$(x_i^r - \alpha_1^r) \cdots (x_i^r - \alpha_k^r)$ vanishes on $Z_{n,k}$, so that
$x_i^{kr} \in {\mathbf {T}}(Z_{n,k})$.
Lemma~\ref{alternating-sum-lemma} applies to show
$e_{n-k+1}({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r) \in {\mathbf {T}}(Y_{n,k})$ and
$e_{n-k+1}({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r) \in {\mathbf {T}}(Z_{n,k})$.
\end{proof}
\subsection{Skip monomials and initial terms}
Step 3 of our program takes more work. We begin by isolating certain monomials in the
initial ideals of $I_{n,k}$ and $J_{n,k}$.
\begin{lemma}
\label{skip-leading-terms}
Let $<$ be the lexicographic order on monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$.
\begin{itemize}
\item
For any $1 \leq i \leq n$ we have
$x_i^{kr+1} \in {\mathrm {in}}_<(I_{n,k})$ and $x_i^{kr} \in {\mathrm {in}}_<(J_{n,k})$.
\item
If $S \subseteq [n]$ satisfies $|S| = n-k+1$, we also have
${\mathbf {x}}(S)^r \in {\mathrm {in}}_<(I_{n,k})$ and ${\mathbf {x}}(S)^r \in {\mathrm {in}}_<(J_{n,k})$.
\end{itemize}
\end{lemma}
\begin{proof}
The first claim follows from the fact that $x_i^{kr+1}$ is a generator of $I_{n,k}$ and
$x_i^{kr}$ is a generator of $J_{n,k}$. The second claim is a consequence of Lemma~\ref{demazure-initial-term}
and Lemma~\ref{demazures-in-ideal}.
\end{proof}
It will turn out that the monomials given in Lemma~\ref{skip-leading-terms} suffice to generate
${\mathrm {in}}_<(I_{n,k})$ and ${\mathrm {in}}_<(J_{n,k})$. The next definition gives a name to the family of monomials which
are not divisible by any of the monomials in Lemma~\ref{skip-leading-terms}.
\begin{defn}
A monomial $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ is {\em $(n,k)$-nonskip} if
\begin{itemize}
\item $x_i^{kr+1} \nmid m$ for $1 \leq i \leq n$, and
\item ${\mathbf {x}}(S)^r \nmid m$ for all $S \subseteq [n]$ with $|S| = n-k+1$.
\end{itemize}
Let ${\mathcal{M}}_{n,k}$ denote the collection of all $(n,k)$-nonskip monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$.
An $(n,k)$-nonskip monomial $m \in {\mathcal{M}}_{n,k}$ is called {\em strongly $(n,k)$-nonskip} if we have
$x_i^{kr} \nmid m$ for all $1 \leq i \leq n$. Let ${\mathcal{N}}_{n,k}$ denote the collection of strongly
$(n,k)$-nonskip monomials.
\end{defn}
We will describe a bijection $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ which restricts to a bijection
${\mathcal{OP}}_{n,k} \rightarrow {\mathcal{N}}_{n,k}$.
The bijection $\Psi$ will be constructed recursively, so that $\Psi(\sigma)$ will be determined by $\Psi(\overline{\sigma})$,
where $\overline{\sigma}$ is the $G_{n-1}$-face obtained from $\sigma$ by deleting the largest letter $n$.
The recursive procedure which produces $\Psi(\sigma)$ from $\Psi(\overline{\sigma})$ will rely
on the following lemmata involving skip monomials. The first of these is an extension of
\cite[Lem. 4.5]{HRS}.
\begin{lemma}
\label{skip-monomial-union}
Let $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial and let $S, T \subseteq [n]$ be subsets. If ${\mathbf {x}}(S)^r \mid m$
and ${\mathbf {x}}(T)^r \mid m$, then ${\mathbf {x}}(S \cup T)^r \mid m$.
\end{lemma}
\begin{proof}
Given $i \in S$, it follows from the definition of skip monomials that the exponent of $x_i$ in ${\mathbf {x}}(S \cup T)^r$
is $\leq$ the exponent of $x_i$ in ${\mathbf {x}}(S)^r$. A similar observation holds for $i \in T$. The claimed divisibility follows.
\end{proof}
The following result is an immediate consequence of Lemma~\ref{skip-monomial-union}; it extends
\cite[Lem. 4.6]{HRS}.
\begin{lemma}
\label{skip-monomial-unique}
Let $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial and let $\ell$ be the largest integer such that there exists a subset
$S \subseteq [n]$ with $|S| = \ell$ and ${\mathbf {x}}(S)^r \mid m$. Then there exists {\em a unique} subset $S \subseteq [n]$
with $|S| = \ell$ and ${\mathbf {x}}(S)^r \mid m$.
\end{lemma}
\begin{proof}
If there were two such sets $S, S'$ then by Lemma~\ref{skip-monomial-union} we would have
${\mathbf {x}}(S \cup S')^r \mid m$, contradicting the definition of $\ell$.
\end{proof}
Given any subset $S \subseteq [n]$, let ${\mathbf {m}}(S) := \prod_{i \in S} x_i$ be the
corresponding squarefree monomial.
For example, we have ${\mathbf {m}}(245) = x_2 x_4 x_5$.
We have the following lemma involving the $r^{th}$ power
${\mathbf {m}}(S)^r$ of ${\mathbf {m}}(S)$. This is the extension of \cite[Lem. 4.7]{HRS}.
\begin{lemma}
\label{skip-monomial-multiply}
Let $m \in {\mathcal{M}}_{n,k}$ be an $(n,k)$-nonskip monomial. There exists a unique set $S \subseteq [n]$ with
$|S| = n-k$ such that
\begin{enumerate}
\item ${\mathbf {x}}(S)^r \mid ( {\mathbf {m}}(S)^r \cdot m)$, and
\item ${\mathbf {x}}(U)^r \nmid ( {\mathbf {m}}(S)^r \cdot m)$ for all $U \subseteq [n]$ with $|U| = n-k+1$.
\end{enumerate}
\end{lemma}
\begin{proof}
We begin with uniqueness. Suppose $S = \{s_1 < \cdots < s_{n-k} \}$ and $T = \{t_1 < \cdots < t_{n-k} \}$ were
two such sets. Let $\ell$ be such that $s_1 = t_1, \dots, s_{\ell-1} = t_{\ell-1}$, and $s_{\ell} \neq t_{\ell}$;
without loss of generality we have $s_{\ell} < t_{\ell}$.
Define a new set $U$ by $U := \{s_1 < \cdots < s_{\ell} < t_{\ell} < t_{\ell + 1} < \cdots < t_{n-k} \}$, so that
$|U| = n-k+1$.
Since ${\mathbf {x}}(S)^r \mid ({\mathbf {m}}(S)^r \cdot m)$ and ${\mathbf {x}}(T)^r \mid ({\mathbf {m}}(T)^r \cdot m)$, we have
${\mathbf {x}}(U)^r \mid ({\mathbf {m}}(S)^r \cdot m)$, contradicting Condition 2 for $S$.
To prove existence, consider the following collection ${\mathcal{C}}$ of subsets of $[n]$:
\begin{equation}
{\mathcal{C}} := \{ S \subseteq [n] \,:\, |S| = n-k \text{ and } {\mathbf {x}}(S)^r \mid ({\mathbf {m}}(S)^r \cdot m) \}.
\end{equation}
The collection ${\mathcal{C}}$ is nonempty; indeed, we have $\{1, 2, \dots, n-k\} \in {\mathcal{C}}$. Let $S_0 \in {\mathcal{C}}$
be the lexicographically {\em final} set in ${\mathcal{C}}$; we argue that ${\mathbf {m}}(S_0)^r \cdot m$ satisfies
Condition 2 of the statement of the lemma, thus finishing the proof.
Let $U \subseteq [n]$ have size $|U| = n-k+1$ and suppose ${\mathbf {x}}(U)^r \mid ({\mathbf {m}}(S_0)^r \cdot m)$.
If there
were an element $u \in U$ with $u < \min(S_0)$, then we would have ${\mathbf {x}}(S_0 \cup \{u\})^r \mid m$,
which contradicts the assumption $m \in {\mathcal{M}}_{n,k}$. Since $|U| > |S_0|$, there exists an element
$u_0 \in U - S_0$ with $u_0 > \min(S_0)$. Write the union $S_0 \cup \{u_0\}$ as
\begin{equation}
S_0 \cup \{u_0\} = \{s_1 < \cdots < s_j < u_0 < s_{j+1} < \cdots < s_{n-k} \},
\end{equation}
where $j \geq 1$. Define a new set $S_0'$ by
\begin{equation}
S_0' := \{s_1 < \cdots < s_{j-1} < u_0 < s_{j+1} < \cdots < s_{n-k} \}.
\end{equation}
Then $S_0'$ comes after $S_0$ in lexicographic order but we have $S_0' \in {\mathcal{C}}$, contradicting our choice
of $S_0$.
\end{proof}
To see how Lemma~\ref{skip-monomial-multiply} works, consider the case
$(n,k,r) = (5,2,3)$ and $m = x_1^2 x_2^6 x_3^3 x_4^3 x_5^6 \in {\mathcal{M}}_{5,2}$.
The collection ${\mathcal{C}}$ of sets
\begin{equation*}
{\mathcal{C}} = \{ S \subseteq [5] \,:\, |S| = 3 \text{ and } {\mathbf {x}}(S)^3 \mid ({\mathbf {m}}(S)^3 \cdot m) \}
\end{equation*}
is given by
\begin{equation*}
{\mathcal{C}} = \{123, 124, 125, 234, 235 \}.
\end{equation*}
However, we have
\begin{align*}
&{\mathbf {x}}(1234)^3 \mid ({\mathbf {m}}(123)^3 \cdot m),
&{\mathbf {x}}(1234)^3 \mid ({\mathbf {m}}(124)^3 \cdot m), \\
&{\mathbf {x}}(1235)^3 \mid ({\mathbf {m}}(125)^3 \cdot m),
&{\mathbf {x}}(2345)^3 \mid ({\mathbf {m}}(234)^3 \cdot m).
\end{align*}
On the other hand, if $S \subseteq [5]$ and $|S| = 4$, then ${\mathbf {x}}(S)^3 \nmid ({\mathbf {m}}(235)^3 \cdot m)$.
Observe that $235$ is the lexicographically final set in ${\mathcal{C}}$.
\subsection{The bijection $\Psi$}
We describe a bijection $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ which restricts
to a bijection ${\mathcal{OP}}_{n,k} \rightarrow {\mathcal{N}}_{n,k}$ with the property that
${\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma))$ for any $G_n$-face $\sigma \in {\mathcal{F}}_{n,k}$.
The construction of $\Psi$ will be recursive in the parameter $n$.
If $n = 1$ and $k = 1$,
the relation ${\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma))$ determines the bijection $\Psi$ uniquely. Explicitly,
the map $\Psi: {\mathcal{F}}_{1,1} \rightarrow {\mathcal{M}}_{1,1}$ is defined by
\begin{equation}
\Psi: (1^c) \mapsto x_1^{r-c-1},
\end{equation}
for any color $0 \leq c \leq r-1$.
If $n = 1$ and $k = 0$ then ${\mathcal{F}}_{1,0}$ consists of the sole face $(1)$.
On the other hand, the collection ${\mathcal{M}}_{1,0}$ of nonskip monomials
consists of the sole monomial $1$.
We are forced to define
\begin{equation}
\Psi: (1) \mapsto 1.
\end{equation}
The combinatorial recursion on which $\Psi$ is based is as follows.
Let $\sigma = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n,k}$
be a $G_n$-face of dimension $k$, so that $\ell = k+1$ or $\ell = k$ according to whether $\sigma$
has a zero block.
Suppose we wish to build a larger face
by inserting $n+1$ into $\sigma$. There are three ways in which this can be done.
\begin{enumerate}
\item We could perform a {\em star insertion} by inserting $n+1$ into one of the
nonzero blocks $B_{\ell - j}$ of $\sigma$ for $0 \leq j \leq k-1$,
also assigning a color $c$ to $n+1$. The resulting $G_{n+1}$-face would be
$(B_1 \mid \cdots \mid B_{\ell - j} \cup \{(n+1)^c\} \mid \cdots \mid B_{\ell})$. This leaves
the dimension $k$ unchanged and increases ${\mathrm {coinv}}$
by $r \cdot (k - j - 1) + (r - c - 1)$.
For example, if $r = 2$ and $\sigma = (3 \mid 2^1 4^0 \mid 1^1) \in {\mathcal{F}}_{4,2}$, the possible star insertions of
$5$ and their effects on ${\mathrm {coinv}}$ are
\begin{center}
$\begin{array}{cccc}
(3 \mid 2^1 4^0 5^1 \mid 1^1) & (3 \mid 2^1 4^0 5^0 \mid 1^1 ) & (3 \mid 2^1 4^0 \mid 1^1 5^1) &
(3 \mid 2^1 4^0 \mid 1^1 5^0) \\
{\mathrm {coinv}} + 0 & {\mathrm {coinv}} + 1 & {\mathrm {coinv}} + 2 & {\mathrm {coinv}} + 3.
\end{array}$
\end{center}
\item We could perform a {\em zero insertion} by inserting $n+1$ into the zero block of $\sigma$ (or by creating
a new zero block whose sole element is $n+1$). This leaves the dimension $k$ unchanged and increases
${\mathrm {coinv}}$ by $kr$.
For example, if $r = 2$ and $\sigma = (3 \mid 2^1 4^0 \mid 1^1) \in {\mathcal{F}}_{4,2}$, the zero insertion of $5$ would
yield $(35 \mid 2^1 4^0 \mid 1^1)$, adding $4$ to ${\mathrm {coinv}}$.
\item We could perform a {\em bar insertion} by inserting $n+1$ into a new singleton nonzero block of $\sigma$ just
after the block $B_{\ell - j}$ for some $0 \leq j \leq k$,
also assigning a color $c$ to $n+1$. The resulting $G_{n+1}$-face would be
$(B_1 \mid \cdots \mid B_{\ell - j}
\mid (n+1)^c \mid B_{\ell - j + 1} \mid \cdots \mid B_{\ell})$. This increases the dimension $k$ by one
and increases ${\mathrm {coinv}}$ by $r \cdot (n-k) + r \cdot (k-j) + (r-c-1)$.
For example, if $r = 2$ and $\sigma = (3 \mid 2^1 4^0 \mid 1^1) \in {\mathcal{F}}_{4,2}$, the possible bar insertions of
$5$ and their effects on ${\mathrm {coinv}}$ are
\begin{center}
$\begin{array}{ccc}
(3 \mid 5^1 \mid 2^1 4^0 \mid 1^1) &
(3 \mid 5^0 \mid 2^1 4^0 \mid 1^1) &
(3 \mid 2^1 4^0 \mid 5^1 \mid 1^1) \\
{\mathrm {coinv}} + 4 & {\mathrm {coinv}} + 5 & {\mathrm {coinv}} + 6 \\ \\
(3 \mid 2^1 4^0 \mid 5^0 \mid 1^1) &
(3 \mid 2^1 4^0 \mid 1^1 \mid 5^1) &
(3 \mid 2^1 4^0 \mid 1^1 \mid 5^0) \\
{\mathrm {coinv}} + 7 & {\mathrm {coinv}} + 8 & {\mathrm {coinv}} + 9.
\end{array}$
\end{center}
\end{enumerate}
The names of these three kinds of insertions come from our combinatorial models for $G_n$-faces; a star insertion adds
a star to the star model of
$\sigma$, a zero insertion adds an element to the zero block of $\sigma$, and a bar insertion adds
a bar to the bar model of $\sigma$.
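The arithmetic of these ${\mathrm {coinv}}$ increments can be summarized as follows. This is a sketch of ours, not the paper's notation: the position $q$ counts nonzero blocks from the left starting at $1$, and the assertions reproduce every increment from the $r = 2$, $\sigma = (3 \mid 2^1 4^0 \mid 1^1)$ examples above.

```python
# Increment of coinv for the three insertion types, with q = position of the
# affected (or newly created) nonzero block, counted 1, 2, ..., from the left.
def star_increment(r, k, q, c):
    # insert (n+1)^c into the q-th nonzero block of a dimension-k face
    return r * (q - 1) + (r - c - 1)

def zero_increment(r, k):
    # insert n+1 into the zero block
    return k * r

def bar_increment(r, n, k, q, c):
    # insert a new singleton block {(n+1)^c} at nonzero position q
    return r * (n - k) + r * (q - 1) + (r - c - 1)

r, n, k = 2, 4, 2
# Star insertions of 5^c (coinv + 0, 1, 2, 3 in the example):
assert [star_increment(r, k, q, c) for q in (1, 2) for c in (1, 0)] == [0, 1, 2, 3]
# Zero insertion of 5 (coinv + 4):
assert zero_increment(r, k) == 4
# Bar insertions of 5^c (coinv + 4, 5, 6, 7, 8, 9):
assert [bar_increment(r, n, k, q, c) for q in (1, 2, 3) for c in (1, 0)] == [4, 5, 6, 7, 8, 9]
```

In particular, the increments attached to the $2(k+1)r$ possible insertions of $n+1$ with a fixed color pattern are pairwise distinct, which is what makes the recursive bijection below possible.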
Let $\sigma = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n,k}$
be a $G_n$-face of dimension $k$ and let $\overline{\sigma}$ be the $G_{n-1}$-face
obtained by deleting $n$ from $\sigma$. Then $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$ if $\sigma$ arises from
$\overline{\sigma}$ by a star or zero insertion and $\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$ if $\sigma$
arises from $\overline{\sigma}$ by a bar insertion.
Assume inductively that the monomial $\Psi(\overline{\sigma})$ has been defined, and that
this monomial lies in ${\mathcal{M}}_{n-1,k}$ or ${\mathcal{M}}_{n-1,k-1}$ according to whether $\overline{\sigma}$ lies
in ${\mathcal{F}}_{n-1,k}$ or ${\mathcal{F}}_{n-1,k-1}$.
We define $\Psi(\sigma)$ by the
rule
\begin{equation}
\Psi(\sigma) := \begin{cases}
\Psi(\overline{\sigma}) \cdot x_n^{r \cdot (k-j-1) + (r-c-1)} &
\text{if $n^c \in B_{\ell - j}$ and $B_{\ell - j}$ is a nonzero nonsingleton,} \\
\Psi(\overline{\sigma}) \cdot x_n^{kr} & \text{if $n$ lies in the zero block of $\sigma$,} \\
\Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r \cdot x_n^{r \cdot (k-j-1) + (r-c-1)} &
\text{if $B_{\ell - j} = \{n^c\}$ is a nonzero singleton,}
\end{cases}
\end{equation}
where in the third branch $S \subseteq [n-1]$ is the unique subset of size $|S| = n-k$ guaranteed by
Lemma~\ref{skip-monomial-multiply} applied to $m = \Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k-1}$.
\begin{example}
Let $(n,k,r) = (8,3,3)$
and consider the face $\sigma = (2 5 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2) \in {\mathcal{F}}_{8,3}$.
In order to calculate $\Psi(\sigma) \in {\mathcal{M}}_{8,3}$, we refer to the following table.
Here `type' refers to the type of insertion (star, zero, or bar) of $n$ at each stage.
\begin{center}
\begin{tabular}{l | l | l | l | l | l}
$\sigma$ & $n$ & $k$ & type & $S$ & $\Psi(\sigma)$ \\ \hline
$(1^0)$ & $1$ & $1$ & & & $x_1^2$ \\
$(2 \mid 1^0)$ & $2$ & $1$ & zero & & $x_1^2 x_2^3$ \\
$(2 \mid 1^0 \mid 3^2)$ & $3$ & $2$ & bar & $2$ & $x_1^2 x_2^3 \cdot {\mathbf {m}}(2)^3 \cdot x_3^3 = x_1^2 x_2^6 x_3^3$ \\
$(2 \mid 1^0 \mid 3^2 4^2)$ & $4$ & $2$ & star & & $x_1^2 x_2^6 x_3^3 x_4^3$ \\
$(25 \mid 1^0 \mid 3^2 4^2)$ & $5$ & $2$ & zero & & $x_1^2 x_2^6 x_3^3 x_4^3 x_5^6$ \\
$(25 \mid 1^0 \mid 6^1 \mid 3^2 4^2)$ & $6$ & $3$ & bar & $235$ &
$x_1^2 x_2^6 x_3^3 x_4^3 x_5^6 \cdot {\mathbf {m}}(235)^3 \cdot x_6^4 = x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4$ \\
$(25 \mid 1^0 7^0 \mid 6^1 \mid 3^2 4^2)$ & $7$ & $3$ & star & &
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2$ \\
$(25 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2)$ & $8$ & $3$ & star & &
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1$ \\
\end{tabular}
\end{center}
We conclude that
\begin{equation*} \Psi(\sigma) =
\Psi(2 5 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2) = x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1
\in {\mathcal{M}}_{8,3}.
\end{equation*}
Observe that the zero block of $\sigma$ is $\{2,5\}$, and that $x_2$ and $x_5$ are the variables in $\Psi(\sigma)$
with exponent $k r = 3 \cdot 3 = 9$.
\end{example}
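The bookkeeping in this example can be replayed mechanically. The following sketch (ours, not part of the paper) accumulates the exponent contributed at each insertion step using the case analysis defining $\Psi$; the sets $S$ appearing in the bar branch are taken from the table above rather than recomputed from Lemma~\ref{skip-monomial-multiply}.

```python
# Replay the recursive computation of Psi(sigma) for r = 3 and
# sigma = (25 | 1^0 7^0 8^1 | 6^1 | 3^2 4^2), following the table.
r, n = 3, 8
exp = [0] * (n + 1)  # exp[i] = exponent of x_i in Psi(sigma); index 0 unused

# Each step records (letter, type, k after insertion, j, c, S). The base case
# (1^0) is treated as a bar insertion with S empty, since
# r(k-j-1) + (r-c-1) = r - c - 1 when k = 1 and j = 0.
steps = [
    (1, "bar",  1, 0, 0, ()),
    (2, "zero", 1, 0, 0, ()),
    (3, "bar",  2, 0, 2, (2,)),
    (4, "star", 2, 0, 2, ()),
    (5, "zero", 2, 0, 0, ()),
    (6, "bar",  3, 1, 1, (2, 3, 5)),
    (7, "star", 3, 2, 0, ()),
    (8, "star", 3, 2, 1, ()),
]
for letter, typ, k, j, c, S in steps:
    if typ == "zero":
        exp[letter] += k * r                      # zero branch: x_letter^(kr)
    else:                                         # star and bar branches
        exp[letter] += r * (k - j - 1) + (r - c - 1)
        for i in S:                               # bar branch: multiply by m(S)^r
            exp[i] += r

# Psi(sigma) = x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1:
assert exp[1:] == [2, 9, 6, 3, 9, 4, 2, 1]
```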
The next result is the extension of \cite[Thm. 4.9]{HRS} to $r \geq 2$.
The proof has the same basic structure, but one must account for the presence of zero blocks.
\begin{proposition}
\label{psi-is-bijection}
The map $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ is a bijection which restricts to a bijection
${\mathcal{OP}}_{n,k} \rightarrow {\mathcal{N}}_{n,k}$. Moreover, for any $\sigma \in {\mathcal{F}}_{n,k}$ we have
\begin{equation}
{\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma)).
\end{equation}
Finally, if $\sigma \in {\mathcal{F}}_{n,k}$ has a zero block $Z$, then
\begin{equation}
Z = \{1 \leq i \leq n \,:\, \text{the exponent of $x_i$ in $\Psi(\sigma)$ is $kr$} \}.
\end{equation}
\end{proposition}
\begin{proof}
We need to show that $\Psi$ is a well-defined function ${\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$. To do this, we induct
on $n$ (with the base case $n = 1$ being clear). Let $\sigma = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n,k}$
and let $\overline{\sigma}$ be the
$G_{n-1}$-face obtained by removing $n$ from $\sigma$. Then $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$
(if the insertion type of $n$ was star or zero) or $\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$ (if the insertion type
of $n$ was bar). We inductively assume that
$\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k}$ or $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k-1}$ accordingly.
Suppose first that the insertion type of $n$ was star or zero, so that $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k}$.
Then we have
\begin{equation}
\Psi(\sigma) = \begin{cases}
\Psi(\overline{\sigma}) \cdot x_n^{r \cdot (k-j-1) + (r-c-1)} &
\text{if $n^c \in B_{\ell - j}$ and $B_{\ell - j}$ is a nonzero nonsingleton,} \\
\Psi(\overline{\sigma}) \cdot x_n^{kr} & \text{if $n$ lies in the zero block of $\sigma$.}
\end{cases}
\end{equation}
By induction and the inequalities $0 \leq j \leq k-1$ and $0 \leq c \leq r-1$,
we know that none of the variable powers $x_1^{kr+1}, \dots, x_n^{kr+1}$ divide $\Psi(\sigma)$.
Let $S \subseteq [n]$ be a subset of size $|S| = n-k+1$. Since $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k}$,
we know that ${\mathbf {x}}(S - \{\max(S)\})^r \nmid \Psi(\overline{\sigma})$. This implies that
${\mathbf {x}}(S)^r \nmid \Psi(\sigma)$. We conclude that $\Psi(\sigma) \in {\mathcal{M}}_{n,k}$.
Now suppose that the insertion type of $n$ was bar, so that $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k-1}$.
We have
\begin{equation}
\Psi(\sigma) = \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r \cdot x_n^{r \cdot (k-j-1) + (r - c- 1)},
\end{equation}
where $B_{\ell - j} = \{n^c\}$ and $S \subseteq [n-1]$ is the unique subset of size $|S| = n-k$ guaranteed
by Lemma~\ref{skip-monomial-multiply} applied to the monomial $m = \Psi(\overline{\sigma})$.
Since none of the variable powers $x_1^{(k-1)\cdot r + 1}, \dots, x_{n-1}^{(k-1) \cdot r + 1}$
divide $\Psi(\overline{\sigma})$, we conclude that none of the variable powers
$x_1^{kr+1}, \dots, x_n^{kr+1}$ divide $\Psi(\sigma)$. Let $T \subseteq [n]$ satisfy $|T| = n-k+1$.
If $n \notin T$, Lemma~\ref{skip-monomial-multiply} and induction guarantee that
${\mathbf {x}}(T)^r \nmid \Psi(\sigma)$. If $n \in T$, then the power of $x_n$ in the monomial ${\mathbf {x}}(T)^r$ is $kr$,
whereas the exponent of $x_n$ in $\Psi(\sigma)$ is $r \cdot (k-j-1) + (r-c-1) \leq kr - 1$, so that
${\mathbf {x}}(T)^r \nmid \Psi(\sigma)$. We conclude that $\Psi(\sigma) \in {\mathcal{M}}_{n,k}$. This finishes the proof
that $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ is well-defined.
The relationship ${\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma))$ is clear from the inductive definition of $\Psi$ and
the previously described effect of insertion on the ${\mathrm {coinv}}$ statistic.
Let $\sigma \in {\mathcal{F}}_{n,k}$ be a $G_n$-face with zero block $Z$ (where $Z$ could be empty). We aim to show that
$Z = \{ 1 \leq i \leq n \,:\, \text{the exponent of $x_i$ in $\Psi(\sigma)$ is $kr$} \}$. To do this, we proceed by induction on
$n$ (the case $n = 1$ being clear). As before, let $\overline{\sigma}$ be the face obtained by erasing $n$ from $\sigma$
and let $\overline{Z}$ be the zero block of $\overline{\sigma}$. We inductively assume that
\begin{equation}
\overline{Z} = \begin{cases}
\{1 \leq i \leq n-1 \,:\, \text{the exponent of $x_i$ in $\Psi(\overline{\sigma})$ is $kr$} \} &
\text{if $\overline{\sigma} \in {\mathcal{F}}_{n-1, k}$}, \\
\{1 \leq i \leq n-1 \,:\, \text{the exponent of $x_i$ in $\Psi(\overline{\sigma})$ is $(k-1) \cdot r$} \} &
\text{if $\overline{\sigma} \in {\mathcal{F}}_{n-1, k-1}$}.
\end{cases}
\end{equation}
Suppose first that $\sigma$ was obtained from $\overline{\sigma}$ by a star insertion, so that
$\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$ and $Z = \overline{Z}$. Since the exponent of $x_n$ in $\Psi(\sigma)$ is
$< kr$, the desired equality of sets holds in this case.
Next, suppose that $\sigma$ was obtained from $\overline{\sigma}$ by a zero insertion, so that
$\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$ and $Z = \overline{Z} \cup \{n\}$. Since the exponent of $x_n$
in $\Psi(\sigma)$ is $kr$, the desired equality of sets holds in this case.
Finally, suppose that $\sigma$ was obtained from $\overline{\sigma}$ by a bar insertion, so that
$\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$ and $Z = \overline{Z}$. Since the exponent of $x_n$ in $\Psi(\sigma)$ is
$< kr$, by induction we need only argue that $Z \subseteq S$, where $S \subseteq [n-1]$ is the
unique subset of size $|S| = n-k$ guaranteed by Lemma~\ref{skip-monomial-multiply} applied to
the monomial $m = \Psi(\overline{\sigma})$.
If the containment $Z \subseteq S$ failed to hold, let
$z \in Z - S$ be arbitrary. By induction, the exponent of $x_z$ in $\Psi(\overline{\sigma})$ is $(k-1) \cdot r$.
Also, we have the divisibility ${\mathbf {x}}(S)^r \mid \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r$.
Since $z \leq n-1$, we have the divisibility ${\mathbf {x}}(S \cup \{z\})^r \mid {\mathbf {x}}(S)^r \cdot x_z^{(k-1) \cdot r}$, so that
${\mathbf {x}}(S \cup \{z\})^r \mid \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r$, which contradicts Lemma~\ref{skip-monomial-multiply}.
We conclude that $Z \subseteq S$. This proves the last sentence of the proposition.
We now turn our attention to proving that $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ is a bijection.
To do this, we will construct its inverse $\Phi: {\mathcal{M}}_{n,k} \rightarrow {\mathcal{F}}_{n,k}$.
The map $\Phi$ will be defined by reversing the recursion used to define $\Psi$.
When $(n,k) = (1,0)$, there is only one choice for $\Phi$; we must define $\Phi: {\mathcal{M}}_{1,0} \rightarrow {\mathcal{F}}_{1,0}$
by
\begin{equation}
\Phi: 1 \mapsto (1).
\end{equation}
When $(n,k) = (1,1)$, since $\Phi$ is supposed to invert
the function $\Psi$, we are forced to define $\Phi: {\mathcal{M}}_{1,1} \rightarrow {\mathcal{F}}_{1,1}$ by
\begin{equation}
\Phi: x_1^c \mapsto (1^{r-c-1}),
\end{equation}
for $0 \leq c \leq r-1$.
In general, fix $k \leq n$ and assume inductively that the functions
\begin{equation*}
\begin{cases}
\Phi: {\mathcal{M}}_{n-1,k} \rightarrow {\mathcal{F}}_{n-1,k}, \\ \Phi: {\mathcal{M}}_{n-1,k-1} \rightarrow {\mathcal{F}}_{n-1,k-1}
\end{cases}
\end{equation*} have already
been defined. We aim to define the function $\Phi: {\mathcal{M}}_{n,k} \rightarrow {\mathcal{F}}_{n,k}$. To this end, let
$m = x_1^{a_1} \cdots x_{n-1}^{a_{n-1}} x_n^{a_n} \in {\mathcal{M}}_{n,k}$ be a monomial. Define a new monomial
$m' := x_1^{a_1} \cdots x_{n-1}^{a_{n-1}}$ by setting $x_n = 1$ in $m$.
We distinguish two cases, according to whether or not $m' \in {\mathcal{M}}_{n-1,k}$.
If $m' \in {\mathcal{M}}_{n-1,k}$, then $\Phi(m') = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n-1,k}$ is a
previously defined $G_{n-1}$-face. Our definition of $\Phi(m)$ depends on the exponent $a_n$ of $x_n$ in $m$.
\begin{itemize}
\item If $m' \in {\mathcal{M}}_{n-1,k}$ and $a_n < kr$, write $a_n = j \cdot r + (r-c-1)$ for a nonnegative integer $j$ and
$0 \leq c \leq r-1$. Let $\Phi(m)$ be obtained from $\Phi(m')$ by star inserting $n^c$ into the $j^{th}$ nonzero block
of $\Phi(m')$, counting from the left starting at zero.
\item If $m' \in {\mathcal{M}}_{n-1,k}$ and $a_n = kr$, let $\Phi(m)$ be obtained from $\Phi(m')$ by adding $n$ to
the zero block of $\Phi(m')$ (creating a zero block if necessary).
\end{itemize}
If $m' \notin {\mathcal{M}}_{n-1,k}$, there exists a subset $S \subseteq [n-1]$ such that $|S| = n-k$ and
${\mathbf {x}}(S)^r \mid m'$. Lemma~\ref{skip-monomial-unique} guarantees that the set $S$ is unique.
{\bf Claim:} We have $\frac{m'}{{\mathbf {m}}(S)^r} \in {\mathcal{M}}_{n-1,k-1}$.
Since $m \in {\mathcal{M}}_{n,k}$, we know that ${\mathbf {x}}(T)^r \nmid \frac{m'}{{\mathbf {m}}(S)^r}$ for all $T \subseteq [n-1]$
with $|T| = n-k+1$. Let $1 \leq j \leq n-1$. We need to show $x_j^{(k-1) \cdot r + 1} \nmid \frac{m'}{{\mathbf {m}}(S)^r}$.
If $j \in S$ this is immediate from the fact that $x_j^{kr + 1} \nmid m'$. If $j \notin S$ and
$x_j^{(k-1) \cdot r + 1} \mid \frac{m'}{{\mathbf {m}}(S)^r}$, then $x_j^{(k-1) \cdot r + 1} \mid m'$ and
${\mathbf {x}}(S \cup \{j\})^r \mid m'$, a contradiction to the assumption $m = m' \cdot x_n^{a_n} \in {\mathcal{M}}_{n,k}$. This finishes the
proof of the Claim.
By the Claim, we recursively have a $G_{n-1}$-face $\Phi \left( \frac{m'}{{\mathbf {m}}(S)^r} \right) \in {\mathcal{F}}_{n-1,k-1}$.
Moreover, we have $a_n < kr$ (because otherwise ${\mathbf {x}}(S \cup \{n\})^r \mid m$, contradicting $m \in {\mathcal{M}}_{n,k}$).
Write $a_n = j \cdot r + (r-c-1)$ for some nonnegative integer $j$ and $0 \leq c \leq r-1$.
Form $\Phi(m)$ from $\Phi \left( \frac{m'}{{\mathbf {m}}(S)^r} \right)$ by bar inserting the singleton block $\{n^c\}$ so that
it becomes the $j^{th}$ nonzero block, counting from the left starting at zero.
For an example of the map $\Phi$, let $(n,k,r) = (8,3,3)$ and
let $m = x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1 \in {\mathcal{M}}_{8,3}$. The following table
computes $\Phi(m) = (25 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2)$.
Throughout this calculation, the nonzero blocks will successively become frozen (i.e., written in bold).
\begin{small}
\begin{center}
\begin{tabular}{l | l | l | l | l | l | l | l}
$m$ & $m'$ & $(n,k)$ & type & $S$ & $\frac{m'}{{\mathbf {m}}(S)^r}$ & $(j,c)$ & $\Phi(m)$ \\ \hline
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1$ &
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2$ & $(8,3)$ & star & & & $(0,1)$ & $(8^1 \mid \cdot \mid \cdot)$ \\
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2$ &
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4$ & $(7,3)$ & star & & & $(0,0)$ & $(7^0 8^1 \mid \cdot \mid \cdot)$ \\
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4$ &
$x_1^2 x_2^9 x_3^6 x_4^3 x_5^9$ & $(6,3)$ & bar & $235$ & $x_1^2 x_2^6 x_3^3 x_4^3 x_5^6$
& $(1,1)$ & $(7^0 8^1 \mid {\bf 6^1} \mid \cdot)$ \\
$x_1^2 x_2^6 x_3^3 x_4^3 x_5^6$ & $x_1^2 x_2^6 x_3^3 x_4^3$ & $(5,2)$ & zero & & & &
$(5 \mid 7^0 8^1 \mid {\bf 6^1} \mid \cdot)$ \\
$x_1^2 x_2^6 x_3^3 x_4^3$ & $x_1^2 x_2^6 x_3^3$ & $(4,2)$ & star & & & $(1,2)$ &
$(5 \mid 7^0 8^1 \mid {\bf 6^1} \mid 4^2 )$ \\
$x_1^2 x_2^6 x_3^3$ & $x_1^2 x_2^6$ & $(3,2)$ & bar & 2 & $x_1^2 x_2^3$ & $(1,2)$ &
$(5 \mid 7^0 8^1 \mid {\bf 6^1} \mid {\bf 3^2 4^2} )$ \\
$x_1^2 x_2^3$ & $x_1^2$ & $(2,1)$ & zero & & & & $(25 \mid 7^0 8^1 \mid {\bf 6^1} \mid {\bf 3^2 4^2})$ \\
$x_1^2$ & 1 & $(1,1)$ & bar & $\varnothing$ & 1 & $(0,0)$ & $(25 \mid {\bf 1^0 7^0 8^1} \mid {\bf 6^1} \mid {\bf 3^2 4^2})$
\end{tabular}
\end{center}
\end{small}
To proceed from one row of the table to the next, we use the following procedure.
\begin{itemize}
\item Define $m$ to be the monomial $m'$ from the above row (if the insertion type in the
above row was star or zero) or the monomial
$\frac{m'}{{\mathbf {m}}(S)^r}$ from the above row (if the insertion type in the above row was bar).
\item Define $(n,k)$ in the current row to be $(n-1,k)$ from the above row (if the insertion type in the above
row was star or zero) or $(n-1,k-1)$ from the above row (if the insertion type in the above row was bar).
\item Using the $(n,k)$ in the current row, define $m'$ from $m$ using the relation $m = m' \cdot x_n^{a_n}$.
\item If $a_n = kr$, define the insertion type of the current row to be zero, let $\Phi(m)$ be obtained from the above
row by adjoining $n$ to its zero block (creating a new zero block if necessary), and move on to the next row.
\item If $a_n < kr$, define $(j,c)$ by the relation $a_n = j \cdot r + (r-c-1)$, where $j$ is nonnegative and $0 \leq c \leq r-1$.
\item If $a_n < kr$ and $m' \in {\mathcal{M}}_{n-1,k}$, define the insertion type of the current row to be star. Let
$\Phi(m)$ be obtained from the above row by inserting $n^c$ into the $j^{th}$ nonzero nonfrozen block from the left, and
move on to the next row.
\item If $a_n < kr$ and $m' \notin {\mathcal{M}}_{n-1,k}$, define the insertion type of the current row to be bar. Let
$S \subseteq [n-1]$ be the set defined by Lemma~\ref{skip-monomial-unique} as above. Calculate $\frac{m'}{{\mathbf {m}}(S)^r}$.
Let $\Phi(m)$ be obtained from the above row by inserting $n^c$ into the $j^{th}$ nonzero nonfrozen block from
the left and freezing that block. Move on to the next row.
\end{itemize}
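The exponent arithmetic in the steps above is easy to mechanize. The following Python sketch (an illustration for this exposition, not part of the development) classifies the exponent $a_n$; note that telling the star type from the bar type additionally requires the membership test $m' \in {\mathcal{M}}_{n-1,k}$, which is omitted here:

```python
def insertion_data(a_n, k, r):
    """Classify the exponent a_n of x_n as in the procedure above.

    Returns the string 'zero' when a_n = kr; otherwise returns the pair
    (j, c) determined by a_n = j*r + (r - c - 1) with 0 <= c <= r - 1.
    (Deciding between star and bar further requires testing whether
    m' lies in M_{n-1,k}, which this sketch omits.)
    """
    if a_n == k * r:
        return 'zero'
    assert 0 <= a_n < k * r, "exponent out of range"
    j, rem = divmod(a_n, r)   # a_n = j*r + rem with 0 <= rem < r
    return (j, r - 1 - rem)
```

For the table above (where $r = 3$) this reproduces every entry of the $(j,c)$ column; for instance `insertion_data(4, 3, 3)` recovers $(1,1)$ in the row $(n,k) = (6,3)$.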
We leave it for the reader to check that the procedure defined above reverses the recursive definition of $\Psi$, so
that $\Phi$ and $\Psi$ are mutually inverse maps.
The fact that $\Psi$ restricts to give a bijection ${\mathcal{OP}}_{n,k} \rightarrow {\mathbb{N}}N_{n,k}$ follows from the assertion about
zero blocks.
\end{proof}
We are ready to identify the standard monomial bases of our quotient rings $R_{n,k}$ and $S_{n,k}$.
The proof of the following result is analogous to the proof of \cite[Thm. 4.10]{HRS}.
\begin{theorem}
\label{m-is-basis}
Let $n \geq k$ be positive integers and
endow monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ with the lexicographic term order $<$.
\begin{itemize}
\item
The collection ${\mathcal{M}}_{n,k}$ of $(n,k)$-nonskip monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$
is the standard monomial basis of $R_{n,k}$.
\item
The collection ${\mathbb{N}}N_{n,k}$ of strongly $(n,k)$-nonskip monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$
is the standard monomial basis of $S_{n,k}$.
\end{itemize}
\end{theorem}
\begin{proof}
Let us begin with the case of $R_{n,k}$. Recall the point set $Y_{n,k} \subseteq {\mathbb {C}}^n$.
Let ${\mathcal{B}}_{n,k}$ be the standard monomial basis of the quotient ring ${\mathbb {C}}[{\mathbf {x}}_n] / {\mathbf {T}}(Y_{n,k})$.
Since $\dim({\mathbb {C}}[{\mathbf {x}}_n] / {\mathbf {T}}(Y_{n,k})) = | Y_{n,k} | = | {\mathcal{F}}_{n,k} |$, we have
\begin{equation}
|{\mathcal{B}}_{n,k}| = | {\mathcal{F}}_{n,k} |.
\end{equation}
On the other hand, Lemma~\ref{i-contained-in-t} says that $I_{n,k} \subseteq {\mathbf {T}}(Y_{n,k})$. This leads
to the containment of initial ideals
\begin{equation}
{\mathrm {in}}_<(I_{n,k}) \subseteq {\mathrm {in}}_<({\mathbf {T}}(Y_{n,k})).
\end{equation}
If ${\mathbb {C}}C_{n,k}$ is the standard monomial basis for $R_{n,k} = {\mathbb {C}}[{\mathbf {x}}_n] / I_{n,k}$,
this implies
\begin{equation}
{\mathcal{B}}_{n,k} \subseteq {\mathbb {C}}C_{n,k}.
\end{equation}
However, Lemma~\ref{skip-leading-terms} and the definition of $(n,k)$-nonskip monomials imply
\begin{equation}
{\mathbb {C}}C_{n,k} \subseteq {\mathcal{M}}_{n,k}.
\end{equation}
Proposition~\ref{psi-is-bijection} shows that $|{\mathcal{M}}_{n,k}| = |{\mathcal{F}}_{n,k}|$. Since we already know
${\mathcal{B}}_{n,k} \subseteq {\mathcal{M}}_{n,k}$ and $|{\mathcal{B}}_{n,k}| = |{\mathcal{F}}_{n,k}|$, we conclude that
\begin{equation}
{\mathcal{B}}_{n,k} = {\mathcal{M}}_{n,k},
\end{equation}
which proves the first assertion of the theorem.
The case of $S_{n,k}$ is similar. An identical chain of reasoning, this time involving $Z_{n,k}$ instead
of $Y_{n,k}$, shows that ${\mathbb{N}}N_{n,k}$ contains the standard monomial basis for $S_{n,k}$.
Proposition~\ref{psi-is-bijection} implies that both $|{\mathbb{N}}N_{n,k}|$ and
$\dim(S_{n,k})$ equal $|{\mathcal{OP}}_{n,k}|$, so this containment is in fact an equality.
\end{proof}
Theorem~\ref{m-is-basis} makes it easy to compute the Hilbert series of $R_{n,k}$ and $S_{n,k}$.
\begin{corollary}
\label{hilbert-series-corollary}
The graded vector spaces
$R_{n,k}$ and $S_{n,k}$ have the following Hilbert series.
\begin{align}
{\mathrm {Hilb}}(R_{n,k}; q) &= \sum_{z = 0}^n {n \choose z} q^{krz} \cdot {\mathrm {rev}}_q( [r]_q^{n-z} \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n-z,k)) \\
&= \sum_{z = 0}^n {n \choose z} q^{krz} \cdot [r]_q^{n-z} \cdot [k]!_{q^r} \cdot {\mathrm {rev}}_q({\mathrm {Stir}}_{q^r}(n-z,k)). \\
{\mathrm {Hilb}}(S_{n,k}; q) &= {\mathrm {rev}}_q ([r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n,k) ) \\ &=
[r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {rev}}_q ({\mathrm {Stir}}_{q^r}(n,k)).
\end{align}
\end{corollary}
\begin{proof}
By Theorem~\ref{m-is-basis} and Proposition~\ref{psi-is-bijection}, we have
\begin{align}
{\mathrm {Hilb}}(R_{n,k}; q) &= \sum_{\sigma \in {\mathcal{F}}_{n,k}} q^{{\mathrm {coinv}}(\sigma)}, \\
{\mathrm {Hilb}}(S_{n,k}; q) &= \sum_{\sigma \in {\mathcal{OP}}_{n,k}} q^{{\mathrm {coinv}}(\sigma)},
\end{align}
so that the proof of the corollary reduces to calculating the generating function of ${\mathrm {coinv}}$ on
${\mathcal{F}}_{n,k}$ and ${\mathcal{OP}}_{n,k}$.
It follows from the work of Steingr\'imsson \cite{Stein} that the generating function of ${\mathrm {coinv}}$ on ${\mathcal{OP}}_{n,k}$ is
\begin{equation}
\sum_{\sigma \in {\mathcal{OP}}_{n,k}} q^{{\mathrm {coinv}}(\sigma)} = {\mathrm {rev}}_q ([r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n,k)),
\end{equation}
proving the desired expression for ${\mathrm {Hilb}}(S_{n,k}; q)$. For the derivation of
${\mathrm {Hilb}}(R_{n,k}; q)$, simply note that a zero block $Z$ of a
$G_n$-face $\sigma \in {\mathcal{F}}_{n,k}$ contributes $kr \cdot |Z|$ to ${\mathrm {coinv}}(\sigma)$.
\end{proof}
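The two displayed forms of each Hilbert series agree because ${\mathrm {rev}}_q$ is multiplicative on products of polynomials and because $[r]_q^n$ and $[k]!_{q^r}$ are palindromic. This can be confirmed numerically with a small pure-Python polynomial sketch (an illustration only; it assumes the standard $q$-Stirling recursion ${\mathrm {Stir}}_q(n,k) = [k]_q \, {\mathrm {Stir}}_q(n-1,k) + {\mathrm {Stir}}_q(n-1,k-1)$):

```python
def pmul(f, g):
    """Product of polynomials stored as coefficient lists (index = power of q)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def padd(f, g):
    out = [0] * max(len(f), len(g))
    for i, a in enumerate(f):
        out[i] += a
    for i, b in enumerate(g):
        out[i] += b
    return out

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def rev(f):
    """rev_q as coefficient reversal (valid here: nonzero constant terms)."""
    return list(reversed(trim(f)))

def qint(m, r):
    """[m]_{q^r} = 1 + q^r + ... + q^{(m-1)r}."""
    out = [0] * ((m - 1) * r + 1)
    for i in range(m):
        out[i * r] = 1
    return out

def qfact(k, r):
    """[k]!_{q^r}."""
    out = [1]
    for i in range(1, k + 1):
        out = pmul(out, qint(i, r))
    return out

def qstir(n, k, r):
    """Stir_{q^r}(n,k) via Stir = [k] Stir(n-1,k) + Stir(n-1,k-1)."""
    if n == 0:
        return [1] if k == 0 else [0]
    if k == 0:
        return [0]
    return trim(padd(pmul(qint(k, r), qstir(n - 1, k, r)), qstir(n - 1, k - 1, r)))

def ppow(f, n):
    out = [1]
    for _ in range(n):
        out = pmul(out, f)
    return out

def check_rev_commutes(n, k, r):
    """rev_q([r]^n [k]!_{q^r} Stir) == [r]^n [k]!_{q^r} rev_q(Stir)?"""
    base = pmul(ppow(qint(r, 1), n), qfact(k, r))
    lhs = rev(pmul(base, qstir(n, k, r)))
    rhs = pmul(base, rev(qstir(n, k, r)))
    return trim(lhs) == trim(rhs)
```

For instance, `check_rev_commutes(5, 3, 2)` returns `True`, as the palindromicity argument predicts.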
The proof of Theorem~\ref{m-is-basis} also gives the {\em ungraded} isomorphism
type of the $G_n$-modules $R_{n,k}$ and $S_{n,k}$.
\begin{corollary}
\label{ungraded-isomorphism-type}
As {\em ungraded}
$G_n$-modules we have
$R_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}]$ and $S_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}]$.
\end{corollary}
\begin{proof}
We have the following isomorphisms of ungraded $G_n$-modules:
\begin{equation}
{\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Y_{n,k}) \cong {\mathbb {C}}[{\mathbf {x}}_n]/I_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}]
\end{equation}
and
\begin{equation}
{\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Z_{n,k}) \cong {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}].
\end{equation}
The proof of Theorem~\ref{m-is-basis} shows that ${\mathbf {T}}(Y_{n,k}) = I_{n,k}$ and
${\mathbf {T}}(Z_{n,k}) = J_{n,k}$.
\end{proof}
Theorem~\ref{m-is-basis} identifies the standard monomial bases ${\mathcal{M}}_{n,k}$ and
${\mathbb{N}}N_{n,k}$ for the quotient rings
$R_{n,k}$ and $S_{n,k}$ with respect to the lexicographic term order. However, checking whether
a monomial $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ is (strongly) $(n,k)$-nonskip involves checking whether ${\mathbf {x}}(S)^r \mid m$
for all possible subsets $S \subseteq [n]$ with $|S| = n-k+1$. The next result gives a more direct characterization
of the monomials of ${\mathcal{M}}_{n,k}$ and ${\mathbb{N}}N_{n,k}$.
A {\em shuffle} of a pair of sequences
$(a_1, \dots, a_p)$ and $(b_1, \dots, b_q)$ is an interleaving $(c_1, \dots, c_{p+q})$ of these sequences
which preserves the relative order of the $a$'s and $b$'s.
The following result is an extension of \cite[Thm. 4.13]{HRS} to $r \geq 2$.
\begin{theorem}
\label{artin-basis}
We have
\begin{equation}
{\mathcal{M}}_{n,k} =
\left\{ x_1^{a_1} \cdots x_n^{a_n} \,:\,
\begin{array}{c}
\text{$(a_1, \dots, a_n)$ is componentwise $\leq$ some shuffle of} \\
\text{$(r-1, 2r-1, \dots, kr-1)$ and $(kr, \dots, kr)$}
\end{array}
\right\}, \\
\end{equation}
where there are $n-k$ copies of $kr$.
Moreover, we have
\begin{equation}
{\mathbb{N}}N_{n,k} =
\left\{ x_1^{a_1} \cdots x_n^{a_n} \,:\,
\begin{array}{c}
\text{$(a_1, \dots, a_n)$ is componentwise $\leq$ some shuffle of} \\
\text{$(r-1, 2r-1, \dots, kr-1)$ and $(kr-1, \dots, kr-1)$}
\end{array}
\right\}, \\
\end{equation}
where there are $n-k$ copies of $kr-1$.
\end{theorem}
\begin{proof}
Let ${\mathcal{A}}_{n,k}$ and ${\mathcal{B}}_{n,k}$ denote the sets of monomials on the
right-hand sides of the first and second asserted equalities,
respectively. A direct check shows that any shuffle of $(r-1, 2r-1, \dots, kr-1)$ and $(kr, \dots, kr)$ is
$(n,k)$-nonskip and that any shuffle of $(r-1, 2r-1, \dots, kr-1)$ and $(kr-1, \dots, kr-1)$ is
strongly $(n,k)$-nonskip. This implies that ${\mathcal{A}}_{n,k} \subseteq {\mathcal{M}}_{n,k}$
and ${\mathcal{B}}_{n,k} \subseteq {\mathbb{N}}N_{n,k}$.
To verify the reverse containment, consider the bijection $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$
of Proposition~\ref{psi-is-bijection}. We argue that $\Psi({\mathcal{F}}_{n,k}) \subseteq {\mathcal{A}}_{n,k}$.
Let $\sigma \in {\mathcal{F}}_{n,k}$ be a $G_n$-face and let $\overline{\sigma}$ be the $G_{n-1}$-face obtained
by removing $n$ from $\sigma$.
{\bf Case 1:} {\em $n$ is not contained in a nonzero singleton block of $\sigma$.}
In this case we have $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$.
We inductively assume $\Psi(\overline{\sigma}) \in {\mathcal{A}}_{n-1,k}$. This means that there is some
shuffle $(a_1, \dots, a_{n-1})$ of the sequences $(r-1, 2r-1, \dots, kr-1)$ and $(kr, \dots, kr)$ such that
$\Psi(\overline{\sigma}) \mid x_1^{a_1} \cdots x_{n-1}^{a_{n-1}}$ (where there are $n-k-1$ copies of $kr$).
By the definition of $\Psi$ we have
$\Psi(\sigma) \mid x_1^{a_1} \cdots x_{n-1}^{a_{n-1}} x_n^{kr}$, and
$(a_1, \dots, a_{n-1}, kr)$ is a shuffle of $(r-1, 2r-1, \dots, kr-1)$ and $(kr, kr, \dots, kr)$,
where there are $n-k$ copies of $kr$. We conclude that $\Psi(\sigma) \in {\mathcal{A}}_{n,k}$.
{\bf Case 2:} {\em $n$ is contained in a nonzero singleton block of $\sigma$.}
In this case we have $\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$.
We inductively assume $\Psi(\overline{\sigma}) \in {\mathcal{A}}_{n-1,k-1}$.
We have
$\Psi(\sigma) = \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r \cdot x_n^i$ for some $0 \leq i \leq kr-1$, where
$S \subseteq [n-1], |S| = n-k,$ and ${\mathbf {x}}(S)^r \mid (\Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r)$. Consider the shuffle
$(a_1, \dots, a_n)$ of $(r-1, 2r-1, \dots, kr-1)$ and $(kr, kr, \dots, kr)$ determined by $a_j = kr$ if and only if
$j \in S$.
We claim $\Psi(\sigma) \mid x_1^{a_1} \cdots x_n^{a_n}$, so that $\Psi(\sigma) \in {\mathcal{A}}_{n,k}$.
To see this,
write $\Psi(\sigma) = x_1^{b_1} \cdots x_n^{b_n}$.
Since $\Psi(\sigma) \in {\mathcal{M}}_{n,k}$ we know that $0 \leq b_j \leq kr$ for all $1 \leq j \leq n$.
If $\Psi(\sigma) \nmid x_1^{a_1} \cdots x_n^{a_n}$, choose $1 \leq j \leq n$ with $a_j < b_j$; by the last sentence
we know $j \notin S$. A direct check shows that ${\mathbf {x}}(S \cup \{j\})^r \mid \Psi(\sigma)$, which contradicts
$\Psi(\sigma) \in {\mathcal{M}}_{n,k}$. We conclude that $\Psi(\sigma) \in {\mathcal{A}}_{n,k}$. This completes the
proof that $\Psi({\mathcal{F}}_{n,k}) \subseteq {\mathcal{A}}_{n,k}$.
To prove the second assertion of the theorem, one verifies $\Psi({\mathcal{OP}}_{n,k}) \subseteq {\mathcal{B}}_{n,k}$.
The argument follows a similar inductive pattern and is left to the reader.
\end{proof}
For example, consider the case $(n,k,r) = (5,3,2)$. The shuffles of $(1,3,5)$ and $(6,6)$ are the ten sequences
\begin{center}
$\begin{array}{ccccc}
(1,3,5,6,6) & (1,3,6,5,6) & (1,6,3,5,6) & (6,1,3,5,6) & (1,3,6,6,5) \\
(1,6,3,6,5) & (6,1,3,6,5) & (1,6,6,3,5) & (6,1,6,3,5) & (6,6,1,3,5),
\end{array}$
\end{center}
so that the standard monomial basis ${\mathcal{M}}_{5,3}$ of $R_{5,3}$ with respect to the lexicographic
term order consists of those monomials $x_1^{a_1} \cdots x_5^{a_5}$ whose exponent sequence
$(a_1, \dots, a_5)$ is componentwise $\leq$ at least one of these ten sequences.
On the other hand, the shuffles of $(1,3,5)$ and $(5,5)$ are the six sequences
\begin{center}
$\begin{array}{cccccc}
(1,3,5,5,5) & (1,5,3,5,5) & (5,1,3,5,5) & (1,5,5,3,5) & (5,1,5,3,5) & (5,5,1,3,5),
\end{array}$
\end{center}
so that the standard monomial basis ${\mathbb{N}}N_{5,3}$ of $S_{5,3}$ consists of those monomials
$x_1^{a_1} \cdots x_5^{a_5}$ where $(a_1, \dots, a_5)$ is componentwise $\leq$ at least one of these
six sequences.
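These shuffle counts are easy to confirm by brute force. The following Python sketch (an illustration only) generates all distinct shuffles of two sequences:

```python
from itertools import combinations

def shuffles(a, b):
    """Return the set of distinct shuffles (interleavings) of a and b."""
    out = set()
    slots = len(a) + len(b)
    for pos in combinations(range(slots), len(a)):
        posset = set(pos)
        seq, ai, bi = [], 0, 0
        for i in range(slots):
            if i in posset:
                seq.append(a[ai]); ai += 1
            else:
                seq.append(b[bi]); bi += 1
        out.add(tuple(seq))
    return out
```

Here `shuffles((1,3,5), (6,6))` has ten elements while `shuffles((1,3,5), (5,5))` has only six, since interleavings that move a copy of $5$ from the second sequence past the $5$ of the first produce repeated sequences.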
The next result gives the reduced Gr\"obner bases of the ideals $I_{n,k}$ and $J_{n,k}$. It is
the extension of \cite[Thm. 4.14]{HRS} to $r \geq 2$.
\begin{theorem}
\label{groebner-basis}
Endow monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$
with the lexicographic term order.
\begin{itemize}
\item
The variable powers
$x_1^{kr+1}, \dots, x_n^{kr+1}$, together with the polynomials
\begin{equation*}
\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})}
\end{equation*}
for $S \subseteq [n]$ with $|S| = n-k+1$, form a Gr\"obner basis for the ideal $I_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$.
If $n > k > 0$, this Gr\"obner basis is reduced.
\item
The variable powers $x_1^{kr}, \dots, x_n^{kr}$, together with the polynomials
\begin{equation*}
\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})}
\end{equation*}
for $S \subseteq [n-1]$ with $|S| = n-k+1$, form a Gr\"obner basis for the ideal $J_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$.
If $n > k > 0$, this Gr\"obner basis is reduced.
\end{itemize}
\end{theorem}
\begin{proof}
By Lemma~\ref{demazures-in-ideal}, the relevant polynomials $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})}$
lie
in the ideals $I_{n,k}$ and $J_{n,k}$; the given variable powers are generators of these ideals.
By Theorem~\ref{m-is-basis}, the number of monomials which are not divisible by any of the initial terms
of the given polynomials equals the dimension of the corresponding quotient ring in either case.
It follows that the given sets of polynomials are Gr\"obner bases for $I_{n,k}$ and $J_{n,k}$.
Suppose $n > k > 0$. By Lemma~\ref{demazure-initial-term}, for any distinct polynomials $f, g$ listed in
either bullet point, the leading monomial of $f$ has coefficient $1$ and does not divide any
monomial in $g$. This implies the claim about reducedness.
\end{proof}
\section{Generalized descent monomial basis}
\label{Descent}
\subsection{A straightening algorithm}
For an $r$-colored permutation $g = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$,
let $d(g) = (d_1(g), \dots, d_n(g))$ be the sequence of nonnegative integers given by
\begin{equation}
\label{d-sequence-definition}
d_i(g) := | \{ j \in {\mathrm {Des}}(\pi_1^{c_1} \dots \pi_n^{c_n}) \,:\, j \geq i \} |.
\end{equation}
We have $d_1(g) = {\mathrm {des}}(g)$ and $d_1(g) \geq \cdots \geq d_n(g)$.
Following Bagno and Biagioli \cite{BB},
we define the {\em descent monomial}
$b_g \in {\mathbb {C}}[{\mathbf {x}}_n]$ by the equation
\begin{equation}
\label{gs-monomial-equation}
b_g := \prod_{i = 1}^n x_{\pi_i}^{r d_i(g) + c_i}.
\end{equation}
When $r = 1$, the monomials $b_g$ were introduced by Garsia \cite{Garsia}
and further studied by Garsia and Stanton \cite{GS}. Garsia \cite{Garsia}
proved that the collection of monomials $\{b_g \,:\, g \in {\mathfrak{S}}_n\}$ descends to a basis for the
coinvariant algebra attached to ${\mathfrak{S}}_n$.
When $r = 2$, a slightly different family of monomials was introduced by
Adin, Brenti, and Roichman \cite{ABR}; they proved that their monomials descend to a basis
for the coinvariant algebra attached to the hyperoctahedral group.
Bagno and Biagioli \cite{BB} introduced the collection of monomials above; they proved
that they descend to a basis for the coinvariant algebra attached to $G_n$
(and, more generally, that an appropriate subset of them descend to a basis of the
coinvariant algebra for the
$G(r,p,n)$ family of complex reflection groups).
We will find it convenient to extend the definition of $b_g$ somewhat to `partial colored permutations'
$g = \pi_1^{c_1} \dots \pi_m^{c_m}$, where $\pi_1, \dots, \pi_m$ are distinct integers in $[n]$
and $0 \leq c_1, \dots, c_m \leq r-1$ are colors. The formulae
(\ref{d-sequence-definition}) and (\ref{gs-monomial-equation}) still make sense in this case and
define a monomial $b_g \in {\mathbb {C}}[{\mathbf {x}}_n]$.
As an example of descent monomials, consider the case $(n,r) = (8,3)$ and
$g = \pi_1^{c_1} \dots \pi_8^{c_8} = 3^2 7^0 1^1 6^1 8^1 2^0 4^2 5^1 \in G_8$.
We calculate ${\mathrm {Des}}(g) = \{2,6\}$, so that $d(g) = (2,2,1,1,1,1,0,0)$.
The monomial $b_g \in {\mathbb {C}}[{\mathbf {x}}_8]$ is given by
\begin{equation*}
b_g = x_3^8 x_7^6 x_1^4 x_6^4 x_8^4 x_2^3 x_4^2 x_5^1.
\end{equation*}
Let $\overline{g} = 6^1 8^1 2^0 4^2 5^1$ be the sequence obtained by erasing the first three letters of $g$.
We leave it for the reader to check that
\begin{equation*}
b_{\overline{g}} = x_6^4 x_8^4 x_2^3 x_4^2 x_5^1,
\end{equation*}
so that $b_{\overline{g}}$ is obtained by truncating $b_g$. We formalize this as an observation.
\begin{observation}
\label{truncation-observation}
Let $g = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$ and let
$\overline{g} = \pi_m^{c_m} \dots \pi_n^{c_n}$ for some $1 \leq m \leq n$. If
$b_g = x_{\pi_1}^{a_1} \cdots x_{\pi_n}^{a_n}$, then $b_{\overline{g}} = x_{\pi_m}^{a_m} \cdots x_{\pi_n}^{a_n}$.
\end{observation}
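Both the example and Observation~\ref{truncation-observation} can be checked mechanically. The following Python sketch (an illustration only) computes $b_g$, assuming the colored-descent convention that $i \in {\mathrm {Des}}(g)$ iff $c_i < c_{i+1}$, or $c_i = c_{i+1}$ and $\pi_i > \pi_{i+1}$, which matches the worked example above:

```python
def descent_set(g):
    """Des(g) for g given as a list of (letter, color) pairs."""
    return {i + 1 for i in range(len(g) - 1)
            if g[i][1] < g[i + 1][1]
            or (g[i][1] == g[i + 1][1] and g[i][0] > g[i + 1][0])}

def descent_monomial(g, r):
    """Exponent dictionary of b_g = prod_i x_{pi_i}^{r d_i + c_i}."""
    des = descent_set(g)
    d = [len([j for j in des if j >= i + 1]) for i in range(len(g))]
    return {letter: r * d[i] + color for i, (letter, color) in enumerate(g)}
```

With $g = 3^2 7^0 1^1 6^1 8^1 2^0 4^2 5^1$ and $r = 3$ this recovers $b_g = x_3^8 x_7^6 x_1^4 x_6^4 x_8^4 x_2^3 x_4^2 x_5^1$, and applying it to the suffix $6^1 8^1 2^0 4^2 5^1$ recovers the truncation $b_{\overline{g}}$.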
The most important tool we will need concerning the $b_g$ monomials is a related
{\em Straightening Lemma} of Bagno and Biagioli \cite{BB} (see also \cite{ABR}).
This lemma uses a certain partial order
on monomials.
In order to define this partial order, we will attach colored permutations to monomials as follows.
\begin{defn}
\label{group-element-definition}
Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. Let
\begin{equation*}
g(m) = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n
\end{equation*}
be the $r$-colored permutation determined uniquely by the following
conditions:
\begin{itemize}
\item $a_{\pi_i} \geq a_{\pi_{i+1}}$ for all $1 \leq i < n$,
\item if $a_{\pi_i} = a_{\pi_{i+1}}$ then $\pi_i < \pi_{i+1}$, and
\item $a_{\pi_i} \equiv c_i$ (mod $r$) for all $1 \leq i \leq n$.
\end{itemize}
\end{defn}
If $m = x_1^{a_1} \cdots x_n^{a_n}$ is a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$, let
$\lambda(m) = (\lambda(m)_1 \geq \cdots \geq \lambda(m)_n)$ be the
nonincreasing
rearrangement of the
exponent sequence $(a_1, \dots, a_n)$.
The following partial order on monomials was introduced in \cite[Sec. 3.3]{ABR}.
\begin{defn}
\label{partial-order-definition}
Let $m, m' \in {\mathbb {C}}[{\mathbf {x}}_n]$
be monomials and
let $g(m) = \pi_1^{c_1} \dots \pi_n^{c_n}$ and $g(m') = \sigma_1^{e_1} \dots \sigma_n^{e_n}$ be the elements
of $G_n$ determined by Definition~\ref{group-element-definition}.
We write $m \prec m'$ if $\deg(m) = \deg(m')$
and one of the following conditions holds:
\begin{itemize}
\item $\lambda(m) <_{dom} \lambda(m')$, or
\item $\lambda(m) = \lambda(m')$ and ${\mathrm {inv}}(\pi) > {\mathrm {inv}}(\sigma)$.
\end{itemize}
\end{defn}
Observe that the numbers ${\mathrm {inv}}(\pi)$ and ${\mathrm {inv}}(\sigma)$ appearing in the second bullet
refer to the inversion numbers of the {\em uncolored} permutations $\pi, \sigma \in {\mathfrak{S}}_n$.
In order to state the Straightening Lemma, we will need to attach a length $n$ sequence
$\mu(m) = (\mu(m)_1 \geq \cdots \geq \mu(m)_n)$ of nonnegative integers to any monomial
$m$. The basic tool for doing this is as follows; its proof is similar to that of
\cite[Claim 5.1]{ABR}.
\begin{lemma}
\label{mu-lemma}
Let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial, let
$g(m) = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$
be the associated group element, and let
$d(m) := d(g(m)) = (d_1 \geq \cdots \geq d_n)$. The sequence
\begin{equation}
a_{\pi_1} - r d_1 - c_1, \dots, a_{\pi_n} - r d_n - c_n
\end{equation}
of exponents of $\frac{m}{b_{g(m)}}$ is a weakly decreasing sequence of nonnegative
multiples of $r$.
\end{lemma}
Lemma~\ref{mu-lemma} justifies the following definition.
\begin{defn}
\label{mu-definition}
Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial and
let $(a_{\pi_1} - r d_1 - c_1 \geq \dots \geq a_{\pi_n} - r d_n - c_n)$
be the weakly decreasing sequence of nonnegative multiples of $r$ guaranteed by
Lemma~\ref{mu-lemma}.
Let $\mu(m) = (\mu(m)_1, \dots, \mu(m)_n)$ be the partition {\em conjugate to} the partition
\begin{equation*}
\left( \frac{a_{\pi_1} - r d_1 - c_1}{r} , \dots, \frac{a_{\pi_n} - r d_n - c_n}{r} \right).
\end{equation*}
\end{defn}
As an example, consider $(n,r) = (8,3)$ and $m = x_1^7 x_2^3 x_3^{14} x_4^2 x_5^1 x_6^7 x_7^{12} x_8^7$.
We have $\lambda(m) = (14,12,7,7,7,3,2,1)$.
We calculate $g(m) \in G_8$ to be
$g(m) = 3^2 7^0 1^1 6^1 8^1 2^0 4^2 5^1$. From this it follows that
$d(m) = (2,2,1,1,1,1,0,0)$. The sequence $\mu(m)$ is determined by the equation
\begin{equation*}
3 \cdot \mu(m)' = \lambda(m) - 3 \cdot d(m) - (2,0,1,1,1,0,2,1),
\end{equation*}
from which it follows that $\mu(m)' = (2,2,1,1,1,0,0,0)$ and $\mu(m) = (5,2,0,0,0,0,0,0)$.
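The quantities in this example can be recomputed with a short Python sketch (an illustration only, using the same colored-descent convention as the earlier sketch: $i \in {\mathrm {Des}}$ iff $c_i < c_{i+1}$, or $c_i = c_{i+1}$ and $\pi_i > \pi_{i+1}$):

```python
def g_of_m(a, r):
    """g(m) of Definition above, as a list of (letter, color) pairs.

    a is the exponent list (a_1, ..., a_n)."""
    n = len(a)
    order = sorted(range(1, n + 1), key=lambda i: (-a[i - 1], i))
    return [(i, a[i - 1] % r) for i in order]

def mu_of_m(a, r):
    """The partition mu(m): conjugate of ((a_{pi_i} - r d_i - c_i)/r)_i."""
    g = g_of_m(a, r)
    des = {i + 1 for i in range(len(g) - 1)
           if g[i][1] < g[i + 1][1]
           or (g[i][1] == g[i + 1][1] and g[i][0] > g[i + 1][0])}
    d = [len([j for j in des if j >= i + 1]) for i in range(len(g))]
    parts = [(a[letter - 1] - r * d[i] - color) // r
             for i, (letter, color) in enumerate(g)]
    # conjugate partition, padded to length n
    return [len([p for p in parts if p >= i]) for i in range(1, len(parts) + 1)]
```

For $m = x_1^7 x_2^3 x_3^{14} x_4^2 x_5^1 x_6^7 x_7^{12} x_8^7$ and $r = 3$ this gives $\mu(m) = (5,2,0,\dots,0)$, as computed above.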
The Straightening Lemma of Bagno and Biagioli \cite{BB}
for monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ is as follows.
\begin{lemma}
\label{straightening-lemma}
(Bagno-Biagioli \cite{BB})
Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. We have
\begin{equation}
m = e_{\mu(m)}({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma,
\end{equation}
where $\Sigma$ is a linear combination of monomials $m' \in {\mathbb {C}}[{\mathbf {x}}_n]$ which
satisfy $m' \prec m$.
\end{lemma}
\subsection{The rings $S_{n,k}$}
We are ready to introduce our descent-type monomials for the rings $S_{n,k}$.
This is an extension to $r \geq 1$ of the $(n,k)$-Garsia-Stanton monomials of \cite[Sec. 5]{HRS}.
\begin{defn}
\label{gs-monomial-definition}
Let $n \geq k$.
The collection ${\mathcal{D}}_{n,k}$ of {\em $(n,k)$-descent monomials}
consists of all monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ of the form
\begin{equation}
b_g \cdot x_{\pi_1}^{r i_1} \cdots x_{\pi_{n-k}}^{r i_{n-k}},
\end{equation}
where $g \in G_n$ satisfies
${\mathrm {des}}(g) < k$ and the integer sequence $(i_1, \dots, i_{n-k})$ satisfies
\begin{equation*}
k - {\mathrm {des}}(g) > i_1 \geq \cdots \geq i_{n-k} \geq 0.
\end{equation*}
\end{defn}
As an example, consider $(n,k,r) = (7,5,2)$ and let
$g = 2^1 5^0 6^1 1^0 3^1 4^0 7^0 \in G_7$. It follows that
${\mathrm {Des}}(g) = \{2,4\}$ so that ${\mathrm {des}}(g) = 2$ and $k - {\mathrm {des}}(g) = 3$. We have
\begin{equation*}
b_g = x_2^5 x_5^4 x_6^3 x_1^2 x_3^1,
\end{equation*}
so that Definition~\ref{gs-monomial-definition} gives rise to the following monomials in
${\mathcal{D}}_{7,5}$:
\begin{center}
$\begin{array}{ccc}
x_2^5 x_5^4 x_6^3 x_1^2 x_3^1, &
x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^2, &
x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^4, \\ \\
x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^2 x_5^2, &
x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^4 x_5^2, &
x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^4 x_5^4.
\end{array}$
\end{center}
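The six monomials above can be enumerated mechanically. The following Python sketch (an illustration only, with the same colored-descent convention as before) produces the $(n,k)$-descent monomials arising from a fixed $g$:

```python
from itertools import combinations_with_replacement

def descent_monomials_from(g, n, k, r):
    """The (n,k)-descent monomials built from a fixed g with des(g) < k,
    returned as exponent dictionaries (zero exponents omitted)."""
    des = {i + 1 for i in range(len(g) - 1)
           if g[i][1] < g[i + 1][1]
           or (g[i][1] == g[i + 1][1] and g[i][0] > g[i + 1][0])}
    assert len(des) < k
    d = [len([j for j in des if j >= i + 1]) for i in range(len(g))]
    b = {letter: r * d[i] + c for i, (letter, c) in enumerate(g)}
    out = []
    for seq in combinations_with_replacement(range(k - len(des)), n - k):
        m = dict(b)
        # apply x_{pi_1}^{r i_1} ... x_{pi_{n-k}}^{r i_{n-k}}, i_1 >= ... >= i_{n-k}
        for j, i_j in enumerate(sorted(seq, reverse=True)):
            m[g[j][0]] += r * i_j
        out.append({v: e for v, e in m.items() if e})
    return out
```

Applied to $g = 2^1 5^0 6^1 1^0 3^1 4^0 7^0$ with $(n,k,r) = (7,5,2)$, this returns exactly the six monomials displayed above.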
By considering the possibilities for the sequence $(i_1 \geq \cdots \geq i_{n-k})$, we see that
\begin{equation}
|{\mathcal{D}}_{n,k}| \leq \sum_{g \in G_n} {n-{\mathrm {des}}(g)-1 \choose n-k} =
\sum_{g \in G_n} {{\mathrm {asc}}(g) \choose n-k}
\end{equation}
(where we have an inequality because {\em a priori} two monomials produced by
Definition~\ref{gs-monomial-definition} for different choices of $g$ could coincide).
If we consider an `ascent-starred' model for elements of ${\mathcal{OP}}_{n,k}$, e.g.
\begin{equation*}
2^1 _*5^1_*1^0 \, \, 6^3 \, \, 4^2_* 3^1 \in {\mathcal{OP}}_{6,3},
\end{equation*}
we see that
\begin{equation}
\label{s-dimension-inequality}
|{\mathcal{D}}_{n,k}| \leq |{\mathcal{OP}}_{n,k}| = \dim(S_{n,k}).
\end{equation}
Our next theorem implies $|{\mathcal{D}}_{n,k}| = \dim(S_{n,k})$.
\begin{theorem}
\label{s-gs-basis-theorem}
The collection ${\mathcal{D}}_{n,k}$ of $(n,k)$-descent monomials descends to a basis of the quotient ring
$S_{n,k}$.
\end{theorem}
\begin{proof}
By the inequality (\ref{s-dimension-inequality}), we need only show that ${\mathcal{D}}_{n,k}$ descends to a spanning
set of the quotient ring $S_{n,k}$. To this end, let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial.
We will show that the coset $m + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$ by induction on the partial order $\prec$.
Suppose $m$ is minimal with respect to the partial order $\prec$. Let us consider the exponent sequence
$(a_1, \dots, a_n)$ of $m$. By $\prec$-minimality, we have
\begin{equation*}
(a_1, \dots, a_n) = (\underbrace{a, \dots, a}_p, \underbrace{a+1, \dots, a+1}_{n-p})
\end{equation*}
for some integers $a \geq 0$ and $0 < p \leq n$. Our analysis breaks into cases depending on the values of $a$ and $p$.
\begin{itemize}
\item
If $a \geq r$ then
$e_n({\mathbf {x}}_n^r) \mid m$, so that $m \equiv 0$ in the quotient $S_{n,k}$.
\item
If $0 \leq a < r$ and $p = n$, then $m = b_g$ where
\begin{equation*}
g = 1^a 2^a \dots n^a \in G_n.
\end{equation*}
\item
If $0 \leq a < r-1$ and $p < n$, then $m = b_g$ where
\begin{equation*}
g = (p+1)^{a+1} (p+2)^{a+1} \dots n^{a+1} 1^a 2^a \dots p^a \in G_n.
\end{equation*}
\item
If $a = r-1$ and $0 < p < n$, then $m = b_g$ where
\begin{equation*}
g = (p+1)^0 (p+2)^0 \dots n^0 1^{r-1} 2^{r-1} \dots p^{r-1} \in G_n.
\end{equation*}
\end{itemize}
We conclude that $m + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$.
Now let $m = x_1^{a_1} \cdots x_n^{a_n}$ be an arbitrary monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. We inductively
assume that for any monomial $m'$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ which satisfies $m' \prec m$, the coset
$m' + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$. We apply the Straightening Lemma~\ref{straightening-lemma}
to $m$, which yields
\begin{equation*}
m = e_{\mu(m)}({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma,
\end{equation*}
where $\Sigma$ is a linear combination of monomials $m' \prec m$; by induction, the ring element $\Sigma + J_{n,k}$
lies in the span of ${\mathcal{D}}_{n,k}$.
Write $d(m) = (d_1, \dots, d_n)$ and $g(m) = \pi_1^{c_1} \dots \pi_n^{c_n}$.
Since $d_1 = {\mathrm {des}}(g(m))$, if ${\mathrm {des}}(g(m)) \geq k$, we would have
$x_{\pi_1}^{kr} \mid b_{g(m)}$, so that $m \equiv \Sigma$ modulo $J_{n,k}$ and $m$ lies in the span of ${\mathcal{D}}_{n,k}$.
Similarly, if $\mu(m)_1 \geq n-k+1$, then $e_{\mu(m)_1}({\mathbf {x}}_n^r) \mid (e_{\mu(m)}({\mathbf {x}}_n^r )\cdot b_{g(m)})$,
so that again $m \equiv \Sigma$ modulo $J_{n,k}$ and $m$ lies in the span of ${\mathcal{D}}_{n,k}$.
By the last paragraph, we may assume that
\begin{center}
${\mathrm {des}}(g(m)) < k$ and $\mu(m)_1 \leq n-k$.
\end{center}
We have the identity
\begin{equation}
m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_n}^{r \cdot \mu(m)'_n},
\end{equation}
where $\mu(m)'$ is the partition conjugate to $\mu(m)$. Since $\mu(m)_1 \leq n-k$, we may rewrite this identity as
\begin{equation}
m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}},
\end{equation}
where the sequence $\mu(m)'_1, \dots, \mu(m)'_{n-k}$ is weakly decreasing.
If $\mu(m)'_1 < k - {\mathrm {des}}(g(m))$, we have $m \in {\mathcal{D}}_{n,k}$.
If $\mu(m)'_1 \geq k - {\mathrm {des}}(g(m))$, then since $r \cdot {\mathrm {des}}(g(m))$ is $\leq$ the power of $x_{\pi_1}$ in $b_{g(m)}$,
we have $x_{\pi_1}^{kr} \mid m$, so that $m \equiv \Sigma$ modulo $J_{n,k}$. In either case,
we have that $m + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$.
\end{proof}
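For small parameters, Theorem~\ref{s-gs-basis-theorem} can be confirmed by brute force: generate every monomial of Definition~\ref{gs-monomial-definition}, discard repetitions, and compare the count with $\dim(S_{n,k}) = |{\mathcal{OP}}_{n,k}| = r^n \cdot k! \cdot {\mathrm {Stir}}(n,k)$. A Python sketch (an illustration only, with the same colored-descent convention as in the earlier sketches):

```python
from itertools import combinations_with_replacement, permutations, product
from math import factorial

def all_descent_monomials(n, k, r):
    """The set D_{n,k}, as a set of exponent vectors (a_1, ..., a_n)."""
    out = set()
    for perm in permutations(range(1, n + 1)):
        for colors in product(range(r), repeat=n):
            g = list(zip(perm, colors))
            des = {i + 1 for i in range(n - 1)
                   if g[i][1] < g[i + 1][1]
                   or (g[i][1] == g[i + 1][1] and g[i][0] > g[i + 1][0])}
            if len(des) >= k:      # need des(g) < k
                continue
            d = [len([j for j in des if j >= i + 1]) for i in range(n)]
            b = {letter: r * d[i] + c for i, (letter, c) in enumerate(g)}
            for seq in combinations_with_replacement(range(k - len(des)), n - k):
                m = dict(b)
                for j, i_j in enumerate(sorted(seq, reverse=True)):
                    m[g[j][0]] += r * i_j
                out.add(tuple(m[v] for v in range(1, n + 1)))
    return out

def stirling2(n, k):
    """Stirling numbers of the second kind, by the usual recursion."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

For instance, $(n,k,r) = (3,2,2)$ yields $2^3 \cdot 2! \cdot S(3,2) = 48$ distinct monomials, matching $\dim(S_{3,2})$.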
\subsection{The rings $R_{n,k}$.}
Our aim is to expand our set of monomials ${\mathcal{D}}_{n,k}$ to a larger set of monomials ${\mathcal{ED}}_{n,k}$
(the `extended' descent monomials) which will descend to a basis for the rings $R_{n,k}$.
\begin{defn}
\label{extended-gs-definition}
Let the {\em extended $(n,k)$-descent monomials} ${\mathcal{ED}}_{n,k}$ be the set of monomials of the form
\begin{equation}
\label{extended-gs-equation}
\left( \prod_{j = 1}^z x_{\pi_j}^{kr} \right) \cdot b_{\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}} \cdot
\left( x_{\pi_{z+1}}^{r \cdot i_{z+1}} x_{\pi_{z+2}}^{r \cdot i_{z+2}} \cdots x_{\pi_{n-k}}^{r \cdot i_{n-k}} \right),
\end{equation}
where
\begin{itemize}
\item we have $0 \leq z \leq n-k$,
\item
$\pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$ is a colored permutation whose length $n-z$ suffix
$\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}$ satisfies
${\mathrm {des}}(\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}) < k$, and
\item we have
\begin{equation*}
k - {\mathrm {des}}(\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}) > i_{z+1} \geq i_{z+2} \geq \cdots \geq i_{n-k} \geq 0.
\end{equation*}
\end{itemize}
We also set ${\mathcal{ED}}_{n,0} := \{1\}$.
\end{defn}
As an example of Definition~\ref{extended-gs-definition}, let $(n,k,r) = (7,3,2)$, let $z = 2$, and consider
the group element
$5^1 1^1 2^0 6^0 7^0 4^1 3^0 \in G_7$.
We have ${\mathrm {des}}(2^0 6^0 7^0 4^1 3^0) = 1$, so that
$k - {\mathrm {des}}(2^0 6^0 7^0 4^1 3^0) = 2$. Moreover, we have
\begin{equation*}
b_{2^0 6^0 7^0 4^1 3^0} = x_2^2 x_6^2 x_7^2 x_4^1,
\end{equation*}
so that we get the following monomials in ${\mathcal{ED}}_{7,3}$:
\begin{center}
$\begin{array}{ccc}
(x_5^6 x_1^6) \cdot (x_2^2 x_6^2 x_7^2 x_4^1), &
(x_5^6 x_1^6) \cdot (x_2^2 x_6^2 x_7^2 x_4^1) \cdot (x_2^2), &
(x_5^6 x_1^6) \cdot (x_2^2 x_6^2 x_7^2 x_4^1) \cdot (x_2^2 x_6^2).
\end{array}$
\end{center}
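These three monomials can likewise be generated mechanically. The Python sketch below (an illustration only, with the same colored-descent convention as before) builds the monomial of (\ref{extended-gs-equation}) from the prefix length $z$, the colored permutation, and the sequence $(i_{z+1}, \dots, i_{n-k})$:

```python
def extended_descent_monomial(g, z, iseq, n, k, r):
    """Exponent dictionary of the monomial in the definition above.

    g lists pi_1^{c_1} ... pi_n^{c_n} as (letter, color) pairs; the
    first z letters each contribute x^{kr}; iseq = (i_{z+1}, ..., i_{n-k}).
    Zero exponents are omitted from the result."""
    assert len(g) == n and len(iseq) == n - k - z
    suffix = g[z:]
    des = {i + 1 for i in range(len(suffix) - 1)
           if suffix[i][1] < suffix[i + 1][1]
           or (suffix[i][1] == suffix[i + 1][1] and suffix[i][0] > suffix[i + 1][0])}
    assert len(des) < k and all(i < k - len(des) for i in iseq)
    d = [len([j for j in des if j >= i + 1]) for i in range(len(suffix))]
    m = {letter: k * r for letter, _ in g[:z]}
    for i, (letter, c) in enumerate(suffix):
        m[letter] = r * d[i] + c
    for j, i_j in enumerate(iseq):
        m[g[z + j][0]] += r * i_j
    return {v: e for v, e in m.items() if e}
```

With $g = 5^1 1^1 2^0 6^0 7^0 4^1 3^0$, $z = 2$, and $(n,k,r) = (7,3,2)$, the three admissible sequences $(i_3, i_4) \in \{(0,0), (1,0), (1,1)\}$ reproduce the three monomials displayed above.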
Observe that the monomial defined in (\ref{extended-gs-equation}) depends only on the set of letters
$\{\pi_1, \dots, \pi_z\}$ contained in the length $z$ prefix $\pi_1^{c_1} \dots \pi_z^{c_z}$
of $\pi_1^{c_1} \dots \pi_n^{c_n}$.
We can therefore form a typical monomial in ${\mathcal{ED}}_{n,k}$ by choosing $0 \leq z \leq n-k$, then choosing a set
$Z \subseteq [n]$ with $|Z| = z$, then forming a typical element of ${\mathcal{D}}_{n-z,k}$ on the variable set
$\{x_j \,:\, j \in [n] - Z\}$, and finally multiplying by the product $\prod_{j \in Z} x_j^{kr}$.
By Theorem~\ref{s-gs-basis-theorem}, there are $|{\mathcal{OP}}_{n-z,k}|$ monomials in ${\mathcal{D}}_{n-z,k}$, and all
of the exponents in these monomials are $< kr$. It follows that
\begin{equation}
|{\mathcal{ED}}_{n,k}| = \sum_{z = 0}^{n-k} {n \choose z} |{\mathcal{D}}_{n-z,k}| = \sum_{z = 0}^{n-k} {n \choose z} |{\mathcal{OP}}_{n-z,k}|
= |{\mathcal{F}}_{n,k}| = \dim(R_{n,k}).
\end{equation}
We will show ${\mathcal{ED}}_{n,k}$ descends to a spanning set of $R_{n,k}$, and hence descends to a basis
of $R_{n,k}$.
\begin{theorem}
\label{r-gs-basis-theorem}
The set ${\mathcal{ED}}_{n,k}$
of extended $(n,k)$-descent monomials descends to a basis of $R_{n,k}$.
\end{theorem}
\begin{proof}
Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. We argue that the coset
$m + I_{n,k} \in R_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$.
Suppose first that $m$ is minimal with respect to $\prec$. The exponent sequence $(a_1, \dots, a_n)$
has the form
\begin{equation*}
(a_1, \dots, a_n) = (\underbrace{a, \dots, a}_p, \underbrace{a+1, \dots, a+1}_{n-p})
\end{equation*}
for some $a \geq 0$ and $0 < p \leq n$.
The same analysis as in the proof of Theorem~\ref{s-gs-basis-theorem} implies that $m \equiv 0$ (mod $I_{n,k}$)
or $m \in {\mathcal{D}}_{n,k} \subseteq {\mathcal{ED}}_{n,k}$.
Now let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be an arbitrary monomial and form
the sequence $d(m) = (d_1, \dots, d_n)$ and the colored permutation $g(m) = \pi_1^{c_1} \dots \pi_n^{c_n}$.
Apply the
Straightening Lemma~\ref{straightening-lemma} to write
\begin{equation}
m = e_{\mu(m)} ({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma,
\end{equation}
where $\Sigma$ is a linear combination of monomials $m' \in {\mathbb {C}}[{\mathbf {x}}_n]$ with $m' \prec m$.
We inductively assume that the ring element $\Sigma + I_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$.
If $\mu(m)_1 \geq n-k+1$, then $m \equiv \Sigma$ (mod $I_{n,k}$), so that $m + I_{n,k}$ lies in the
span of ${\mathcal{ED}}_{n,k}$. If ${\mathrm {des}}(g(m)) > k$, then $x_{\pi_1}^{(k+1)r} \mid b_{g(m)}$, so
that again $m \equiv \Sigma$ (mod $I_{n,k}$) and $m + I_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$.
By the last paragraph, we may assume
\begin{center}
$\mu(m)_1 \leq n-k$ and ${\mathrm {des}}(g(m)) \leq k$.
\end{center}
Our analysis breaks up into two cases depending on whether ${\mathrm {des}}(g(m)) < k$ or ${\mathrm {des}}(g(m)) = k$.
{\bf Case 1:} {\em $\mu(m)_1 \leq n-k$ and ${\mathrm {des}}(g(m)) < k$.}
If any element in the exponent sequence $(a_1, \dots, a_n)$ of $m$ is $> kr$, then $m \equiv 0$ (mod $I_{n,k}$).
We may therefore assume $a_j \leq kr$ for all $j$.
Since we have $\mu(m)_1 \leq n-k$, we have the identity
\begin{equation}
m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}.
\end{equation}
If $\mu(m)'_1 < k - {\mathrm {des}}(g(m))$, we have
$m \in {\mathcal{D}}_{n,k} \subseteq {\mathcal{ED}}_{n,k}$. If $\mu(m)_1' > k - {\mathrm {des}}(g(m))$, we have
$x_{\pi_1}^{(k+1) \cdot r} \mid m$, which contradicts $a_{\pi_1} \leq kr$.
By the last paragraph, we may assume $\mu(m)'_1 = k - {\mathrm {des}}(g(m))$. Since every term in
the weakly decreasing sequence
$(a_{\pi_1}, \dots, a_{\pi_n})$ is $\leq kr$, there exists an index $1 \leq z \leq n$ such that
$(a_{\pi_1}, \dots, a_{\pi_n}) = (kr, \dots, kr, a_{\pi_{z+1}}, \dots, a_{\pi_n})$,
where $a_{\pi_{z+1}} < kr$. Since every exponent in $b_{g(m)}$ is $< kr$, we in fact have
$1 \leq z \leq n-k$.
Let $\overline{g}$ be the partial colored permutation
$\overline{g} := \pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}$.
Applying Observation~\ref{truncation-observation}, we have
\begin{align}
m &= b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}} \\
&= \left( \prod_{j = 1}^z x_{\pi_j}^{kr} \right) \cdot b_{\overline{g}}
\cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}},
\end{align}
for $1 \leq z \leq n-k$. The monomial
$b_{\overline{g}} \cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}$
only involves the variables $x_{\pi_{z+1}}, \dots, x_{\pi_n}$, and every exponent in this product is
$< kr$. If $\mu(m)'_{z+1} \geq k - {\mathrm {des}}(\overline{g})$, we would have the divisibility
$x_{\pi_{z+1}}^{kr} \mid
b_{\overline{g}} \cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}$,
which is a contradiction.
It follows that $\mu(m)'_{z+1} < k - {\mathrm {des}}(\overline{g})$, which implies that $m \in {\mathcal{ED}}_{n,k}$.
We conclude that the coset $m + I_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$, which completes this case.
{\bf Case 2:} {\em $\mu(m)_1 \leq n-k$ and ${\mathrm {des}}(g(m)) = k$.}
As in the previous case, we may assume that every exponent appearing in the monomial $m$ is $\leq kr$.
We again write
\begin{equation}
m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}
\end{equation}
and, since the sequence $(a_{\pi_1}, \dots, a_{\pi_n})$ is weakly decreasing, have
$(a_{\pi_1}, \dots, a_{\pi_n}) = (kr, \dots, kr, a_{\pi_{z+1}}, \dots, a_{\pi_n})$ with $a_{\pi_{z+1}} < kr$
for some $1 \leq z \leq n-k$. Define the partial colored permutation
$\overline{g} := \pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}$.
Since the exponent of $x_{\pi_{z+1}}$ in $m$
is $\geq r \cdot {\mathrm {des}}(\overline{g})$, we have ${\mathrm {des}}(\overline{g}) < k$. If $\mu(m)'_{z+1} \geq k - {\mathrm {des}}(\overline{g})$,
the exponent of $x_{\pi_{z+1}}$ in $m$ would be $\geq kr$, so we must have
$\mu(m)'_{z+1} < k - {\mathrm {des}}(\overline{g})$.
Using Observation~\ref{truncation-observation} to
write
\begin{align}
m &= b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}} \\
&= \left( \prod_{j = 1}^z x_{\pi_j}^{kr} \right) \cdot b_{\overline{g}}
\cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}},
\end{align}
we see that $m \in {\mathcal{ED}}_{n,k}$.
\end{proof}
The following lemma involving expansions of monomials $m$ into the
${\mathcal{ED}}_{n,k}$ basis of $R_{n,k}$ will be useful in the next section. For $0 \leq z \leq n-k$, let
${\mathcal{ED}}_{n,k}(z)$ be the subset of monomials in ${\mathcal{ED}}_{n,k}$ which contain exactly $z$ variables with power
$kr$. We get a stratification
\begin{equation}
{\mathcal{ED}}_{n,k} = {\mathcal{ED}}_{n,k}(0) \uplus {\mathcal{ED}}_{n,k}(1) \uplus \cdots \uplus {\mathcal{ED}}_{n,k}(n-k).
\end{equation}
For convenience, we set ${\mathcal{ED}}_{n,k}(z) = \varnothing$ for $z > n-k$.
\begin{lemma}
\label{zero-stability-lemma}
Let $(a_1, \dots, a_n)$ satisfy $0 \leq a_i \leq kr$ for all $i$, let
$m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be the corresponding monomial, and let
$z := | \{1 \leq i \leq n \,:\, a_i = kr \} |$. The expansion of $m + I_{n,k}$ in the basis ${\mathcal{ED}}_{n,k}$ of $R_{n,k}$
only involves terms in
${\mathcal{ED}}_{n,k}(0) \uplus {\mathcal{ED}}_{n,k}(1) \uplus \cdots \uplus {\mathcal{ED}}_{n,k}(z)$.
\end{lemma}
\begin{proof}
Applying the Straightening Lemma~\ref{straightening-lemma} to $m$, we get
\begin{equation}
m = e_{\mu(m)}({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma,
\end{equation}
where $\Sigma$ is a linear combination of monomials $m'$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ which satisfy $m' \prec m$.
The proof of Theorem~\ref{r-gs-basis-theorem} shows that either
\begin{itemize}
\item the monomial $m$ is an element of ${\mathcal{ED}}_{n,k}$, and hence an element of ${\mathcal{ED}}_{n,k}(z)$, or
\item we have $m \equiv \Sigma$ (mod $I_{n,k}$).
\end{itemize}
If the first bullet holds, we are done. We may therefore assume that $m \equiv \Sigma$ (mod $I_{n,k}$).
Let $m' = x_1^{a'_1} \cdots x_n^{a'_n}$ be a monomial
appearing in $\Sigma$.
The dominance relation $\lambda(m') \leq_{dom} \lambda(m)$ implies
$| \{ 1 \leq i \leq n \,:\, a'_i = kr \} | \leq z$. We may therefore apply the logic of the last paragraph to each such
monomial $m'$, and iterate.
\end{proof}
\section{Frobenius series}
\label{Frobenius}
In this section we will determine the graded isomorphism types of the rings $R_{n,k}$ and $S_{n,k}$.
When $r = 1$, this was carried out for the rings $S_{n,k}$ in \cite[Sec. 6]{HRS}.
It turns out that the methods developed in \cite[Sec. 6]{HRS} generalize fairly readily to the $S$ rings, but not
the $R$ rings. Our approach will be to describe the $R$ rings in terms of the $S$ rings, and then
describe the isomorphism type of the $S$ rings.
\subsection{Relating $R$ and $S$ }
In this section, we describe the graded isomorphism type of $R_{n,k}$ in terms of the rings
$S_{n,k}$. The result here is as follows.
\begin{proposition}
\label{r-to-s-reduction}
We have an isomorphism of graded $G_n$-modules
\begin{equation}
R_{n,k} \cong \bigoplus_{z = 0}^{n-k} {\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k}^r \otimes {\mathbb {C}}_{krz}).
\end{equation}
Here ${\mathbb {C}}_{krz}$ is a copy of the trivial $1$-dimensional representation of $G_z$ sitting in degree $krz$.
Equivalently, we have the identity
\begin{equation}
{\mathrm {grFrob}}(R_{n,k}; q) = \sum_{z = 0}^{n-k} q^{krz} \bm{s}_{(\varnothing, \dots, \varnothing, (z))}({\mathbf {x}})
\cdot {\mathrm {grFrob}}(S_{n-z,k}^r; q).
\end{equation}
\end{proposition}
\begin{proof}
For $0 \leq z \leq n-k$, let $R_{n,k}(z)$ be the subspace of $R_{n,k}$ given by
\begin{equation}
R_{n,k}(z) := \mathrm{span}_{{\mathbb {C}}} \{ x_1^{a_1} \cdots x_n^{a_n} + I_{n,k} \,:\,
\text{$0 \leq a_i \leq kr$ and at most $z$ of $a_1, \dots, a_n$ equal $kr$} \}.
\end{equation}
It is clear that $R_{n,k}(z)$ is graded and stable under the action of $G_n$. We also have a filtration
\begin{equation}
R_{n,k}(0) \subseteq R_{n,k}(1) \subseteq \cdots \subseteq R_{n,k}(n-k) = R_{n,k}.
\end{equation}
It follows that there is an isomorphism of graded $G_n$-modules
\begin{equation}
R_{n,k} \cong Q_{n,k}^r(0) \oplus Q_{n,k}^r(1) \oplus \cdots \oplus Q_{n,k}^r(n-k),
\end{equation}
where $Q_{n,k}^r(z) := R_{n,k}(z)/R_{n,k}(z-1)$.
Consider the stratification ${\mathcal{ED}}_{n,k} = {\mathcal{ED}}_{n,k}(0) \uplus {\mathcal{ED}}_{n,k}(1) \uplus \cdots \uplus {\mathcal{ED}}_{n,k}(n-k)$
of the basis ${\mathcal{ED}}_{n,k}$ of $R_{n,k}$.
The containment ${\mathcal{ED}}_{n,k}(z') \subseteq R_{n,k}(z)$ for $z' \leq z$ implies
\begin{equation}
\dim(R_{n,k}(z)) \geq | {\mathcal{ED}}_{n,k}(0)| + |{\mathcal{ED}}_{n,k}(1)| + \cdots + |{\mathcal{ED}}_{n,k}(z)|.
\end{equation}
On the other hand, Lemma~\ref{zero-stability-lemma} implies that $R_{n,k}(z)$ is spanned by
(the image of the monomials in)
$\biguplus_{z' = 0}^z {\mathcal{ED}}_{n,k}(z')$.
It follows that
\begin{equation}
\dim(R_{n,k}(z)) = | {\mathcal{ED}}_{n,k}(0)| + |{\mathcal{ED}}_{n,k}(1)| + \cdots + |{\mathcal{ED}}_{n,k}(z)|
\end{equation}
and $\biguplus_{z' = 0}^z {\mathcal{ED}}_{n,k}(z')$ descends to a basis of $R_{n,k}(z)$.
Consequently, the set ${\mathcal{ED}}_{n,k}(z)$ descends to a basis for $Q_{n,k}^r(z)$.
Fix $0 \leq z \leq n-k$.
It follows from the definition of
${\mathcal{ED}}_{n,k}(z)$ that
\begin{equation}
\dim(Q_{n,k}^r(z)) = |{\mathcal{ED}}_{n,k}(z)| = {n \choose z} \cdot |{\mathcal{OP}}_{n-z,k}| = {n \choose z} \cdot \dim(S_{n-z,k}),
\end{equation}
which coincides with the dimension of
${\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k}^r \otimes {\mathbb {C}}_{krz})$. We claim that we have
an isomorphism of graded $G_n$-modules
\begin{equation}
\label{main-module-isomorphism}
Q_{n,k}^r(z) \cong {\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k}^r \otimes {\mathbb {C}}_{krz}).
\end{equation}
In order to prove the isomorphism (\ref{main-module-isomorphism}),
for any $T \subseteq [n]$, let $G_{[n] - T}$ be the group of $r$-colored permutations on the index set $[n] - T$ and
let $S_{n-z,k}(T)$ be the module $S_{n-z,k}$
in the variable set $\{x_j \,:\, j \in T\}$.
Any group element $g \in G_{[n] - T}$ acts trivially on the product
$\prod_{j \notin T} x_j^{kr}$.
We may therefore interpret the induction on the
right-hand side of (\ref{main-module-isomorphism}) as
\begin{equation}
{\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k}^r \otimes {\mathbb {C}}_{krz}) \cong
\bigoplus_{|T| = n-z} S_{n-z,k}(T) \otimes \mathrm{span} \left\{ \prod_{j \notin T} x_j^{kr} \right\},
\end{equation}
which reduces our task to proving
\begin{equation}
\label{modified-module-isomorphism}
Q_{n,k}^r(z) \cong \bigoplus_{|T| = n-z} S_{n-z,k}(T) \otimes \mathrm{span} \left\{ \prod_{j \notin T} x_j^{kr} \right\}.
\end{equation}
The set of monomials ${\mathcal{ED}}_{n,k}(z)$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ descends to a vector space basis of the
graded modules appearing on either side of
(\ref{modified-module-isomorphism}); the corresponding identification of cosets
gives rise to an isomorphism
\begin{equation}
\varphi: Q_{n,k}^r(z) \rightarrow
\bigoplus_{|T| = n-z} S_{n-z,k}(T) \otimes \mathrm{span} \left\{ \prod_{j \notin T} x_j^{kr} \right\}
\end{equation}
of graded vector spaces.
It is clear that $\varphi$ commutes with the action of the diagonal subgroup
${\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r \subseteq G_n$; we need only show that $\varphi$ commutes with the action
of ${\mathfrak{S}}_n$.
The proof that the map $\varphi$ commutes with the action of ${\mathfrak{S}}_n$ uses straightening.
Let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathcal{ED}}_{n,k}(z)$ be a typical
basis element and let $\pi.m = x_{\pi_1}^{a_1} \cdots x_{\pi_n}^{a_n}$ be the image of $m$ under
a typical permutation $\pi \in {\mathfrak{S}}_n$.
If $\pi.m \in {\mathcal{ED}}_{n,k}(z)$ the definition of $\varphi$ yields $\varphi(\pi.m) = \pi.\varphi(m)$.
If
$\pi.m \notin {\mathcal{ED}}_{n,k}(z)$,
by Lemma~\ref{straightening-lemma} we can write
$\pi.m = e_{\mu(\pi.m)}({\mathbf {x}}_n^r) \cdot b_{g(\pi.m)} + \Sigma$, where $\Sigma$ is a linear
combination of monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ which are $\prec \pi.m$.
As in the proof of Lemma~\ref{zero-stability-lemma}, since $m \in {\mathcal{ED}}_{n,k}(z)$ but
$\pi.m \notin {\mathcal{ED}}_{n,k}(z)$, we know that
$\pi.m \equiv \Sigma$ in the modules on either side of (\ref{modified-module-isomorphism}).
Iterating this procedure, we see that $\pi.m$ has the same expansion into the bases induced from
${\mathcal{ED}}_{n,k}(z)$ on either side of (\ref{modified-module-isomorphism}).
This proves that the map $\varphi$ is ${\mathfrak{S}}_n$-equivariant, so that
$\varphi$ is an isomorphism of graded $G_n$-modules.
\end{proof}
\subsection{The rings $S_{n,k,s}$}
By Proposition~\ref{r-to-s-reduction}, the graded isomorphism type of $R_{n,k}$ is
determined by the graded isomorphism type of $S_{n,k}$. The remainder
of this section will focus on the rings $S_{n,k}$.
As in \cite[Sec. 6]{HRS}, to determine the graded isomorphism type of $S_{n,k}$
we will introduce a more general class of quotients.
\begin{defn}
Let $n, k, s$ be positive integers with $n \geq k \geq s$.
Define $J_{n,k,s} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ to be the ideal
\begin{equation*}
J_{n,k,s} := \langle x_1^{kr}, \dots , x_n^{kr}, e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-s+1}({\mathbf {x}}_n^r) \rangle.
\end{equation*}
Let $S_{n,k,s} := {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k,s}$ be the corresponding quotient ring.
\end{defn}
When $s = k$ we have $J_{n,k,k} = J_{n,k}$, so that $S_{n,k,k} = S_{n,k}$.
Our aim for the remainder of this section is to build a combinatorial model for the quotient
$S_{n,k,s}$ using the point orbit technique of Section~\ref{Hilbert}.
To this end, for $n \geq k \geq s$ let ${\mathcal{OP}}_{n,k,s}$ denote the collection of $r$-colored $k$-block
ordered set partitions $\sigma = (B_1 \mid \cdots \mid B_k)$ of $[n + (k-s)]$ such that,
for $1 \leq i \leq k-s$, we have $n+i \in B_{s+i}$ and $n+i$ has color $0$.
For example, with $r = 3$ we have
\begin{equation*}
( 2^0 3^2 \mid 1^2 6^0 \mid {\bf 7^0} \mid 5^1 {\bf 8^0} \mid 4^1 {\bf 9^0} ) \in {\mathcal{OP}}_{6,5,2}.
\end{equation*}
Given $\sigma \in {\mathcal{OP}}_{n,k,s}$, we will refer to the letters $n+1, n+2, \dots, n+(k-s)$ as {\em big};
the remaining letters will be called {\em small}.
The group $G_n$ acts on ${\mathcal{OP}}_{n,k,s}$ by acting on the small letters.
We model this action with a point set as follows.
\begin{defn}
Fix positive real numbers
$0 < \alpha_1 < \cdots < \alpha_k$.
Let $Z_{n,k,s} \subseteq {\mathbb {C}}^{n+(k-s)}$ be the collection of
points $(z_1, \dots, z_n, z_{n+1}, \dots, z_{n+k-s})$ such that
\begin{itemize}
\item we have $z_i \in \{ \zeta^c \alpha_j \,:\, 0 \leq c \leq r-1, \, \, 1 \leq j \leq k\}$ for all $1 \leq i \leq n + (k-s)$,
\item we have $\{\alpha_1, \dots, \alpha_k\} = \{|z_1|, \dots, |z_{n+(k-s)}| \}$, and
\item we have $z_{n+i} = \alpha_{s+i}$ for all $1 \leq i \leq k-s$.
\end{itemize}
\end{defn}
It is evident that the point set $Z_{n,k,s}$ is stable under the action of $G_n$ on the first $n$
coordinates of ${\mathbb {C}}^{n + (k-s)}$ and that the action of $G_n$ on $Z_{n,k,s}$ is isomorphic to the action of
$G_n$ on ${\mathcal{OP}}_{n,k,s}$.
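For small parameters the match between $Z_{n,k,s}$ and ${\mathcal{OP}}_{n,k,s}$ can be tested by exhaustive enumeration. The Python sketch below is illustrative only; it models a coordinate $\zeta^c \alpha_j$ as the pair $(c,j)$, and it assumes the absolute-value condition in the definition runs over all $n+(k-s)$ coordinates (so that the frozen coordinates supply the radii $\alpha_{s+1}, \dots, \alpha_k$).

```python
from itertools import product

def card_Z(n, k, s, r):
    """|Z_{n,k,s}|: points (z_1,...,z_{n+k-s}), a coordinate zeta^c alpha_j
    modeled as the pair (c, j).  The last k-s coordinates are frozen to
    (0, s+1), ..., (0, k); every radius alpha_1,...,alpha_k must occur."""
    big = set(range(s + 1, k + 1))           # radii forced by z_{n+1},...,z_{n+k-s}
    count = 0
    for pt in product(product(range(r), range(1, k + 1)), repeat=n):
        radii = {j for (_, j) in pt} | big
        if radii == set(range(1, k + 1)):
            count += 1
    return count

def card_OP(n, k, s, r):
    """|OP_{n,k,s}|: the big letter n+i sits in block s+i with color 0, so an
    element amounts to coloring the small letters and distributing them into
    the k blocks so that blocks 1,...,s are nonempty."""
    count = 0
    for blocks in product(range(1, k + 1), repeat=n):
        if set(range(1, s + 1)) <= set(blocks):
            count += r ** n
    return count

for n, k, s, r in [(2, 2, 1, 2), (3, 2, 1, 2), (3, 3, 2, 2)]:
    assert card_Z(n, k, s, r) == card_OP(n, k, s, r)
```

The agreement of the two counts reflects the coordinatewise bijection: the radius of $z_i$ records which block the small letter $i$ occupies, and the root-of-unity factor records its color.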
Let ${\mathbf {I}}(Z_{n,k,s}) \subseteq {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]$ be the ideal of polynomials which vanish on $Z_{n,k,s}$ and let
${\mathbf {T}}(Z_{n,k,s}) \subseteq {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]$ be the corresponding top component ideal.
Since $x_{n+i} - \alpha_{s+i} \in {\mathbf {I}}(Z_{n,k,s})$ for all $1 \leq i \leq k-s$, we have $x_{n+i} \in {\mathbf {T}}(Z_{n,k,s})$.
Let $\varepsilon: {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}] \twoheadrightarrow {\mathbb {C}}[{\mathbf {x}}_n]$ be the map which evaluates $x_{n+i} = 0$ for all
$1 \leq i \leq k-s$ and let $T_{n,k,s} := \varepsilon({\mathbf {T}}(Z_{n,k,s}))$ be the image of ${\mathbf {T}}(Z_{n,k,s})$ under
$\varepsilon$.
Then $T_{n,k,s}$ is an ideal in ${\mathbb {C}}[{\mathbf {x}}_n]$ and we have an identification of
$G_n$-modules
\begin{equation*}
{\mathbb {C}}[{\mathcal{OP}}_{n,k,s}] \cong {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]/{\mathbf {I}}(Z_{n,k,s}) \cong
{\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]/{\mathbf {T}}(Z_{n,k,s}) \cong {\mathbb {C}}[{\mathbf {x}}_n]/T_{n,k,s}.
\end{equation*}
It will develop that $J_{n,k,s} = T_{n,k,s}$. We can generalize
Lemma~\ref{i-contained-in-t} to prove one containment right away.
\begin{lemma}
\label{j-contained-in-t-generalized}
We have $J_{n,k,s} \subseteq T_{n,k,s}$.
\end{lemma}
\begin{proof}
We show that every generator of $J_{n,k,s}$ is contained in $T_{n,k,s}$.
For $1 \leq i \leq n$ we have
$\prod_{j = 1}^{k} \prod_{c = 0}^{r-1} (x_i - \zeta^c \alpha_j) \in {\mathbf {I}}(Z_{n,k,s})$, so that
$x_i^{kr} \in T_{n,k,s}$.
The proof of Lemma~\ref{i-contained-in-t} shows that $e_j({\mathbf {x}}_{n+(k-s)}^r) \in {\mathbf {T}}(Z_{n,k,s})$
for all $j \geq n-s+1$. Applying the evaluation map $\varepsilon$ gives
$e_j({\mathbf {x}}_n^r) = \varepsilon(e_j({\mathbf {x}}_{n+(k-s)}^r)) \in T_{n,k,s}$.
\end{proof}
Proving the equality $J_{n,k,s} = T_{n,k,s}$ will involve a dimension count. To facilitate this,
let us identify some terms in the initial ideal of $J_{n,k,s}$.
The following lemma generalizes the corresponding statement for $J_{n,k}$; its proof is left to the reader.
\begin{lemma}
\label{skip-leading-terms}
Let $<$ be the lexicographic term order on monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ and let
${\mathrm {in}}_<(J_{n,k,s})$ be the initial ideal of $J_{n,k,s}$. We have
\begin{itemize}
\item $x_i^{kr} \in {\mathrm {in}}_<(J_{n,k,s})$ for $1 \leq i \leq n$, and
\item ${\mathbf {x}}(S)^r \in {\mathrm {in}}_<(J_{n,k,s})$ for all $S \subseteq [n]$ with $|S| = n-s+1$.
\end{itemize}
\end{lemma}
Lemma~\ref{skip-leading-terms} motivates the following generalization of strongly
$(n,k)$-nonskip monomials.
\begin{defn}
Let ${\mathcal{N}}_{n,k,s}$ be the collection of monomials $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ such that
\begin{itemize}
\item $x_i^{kr} \nmid m$ for all $1 \leq i \leq n$, and
\item ${\mathbf {x}}(S)^r \nmid m$ for all $S \subseteq [n]$ with $|S| = n-s+1$.
\end{itemize}
\end{defn}
By Lemma~\ref{skip-leading-terms}, the set ${\mathcal{N}}_{n,k,s}$ contains the standard monomial basis
of $S_{n,k,s}$; we will prove that these two sets of monomials coincide.
Let us first observe a relationship between the monomials in ${\mathcal{N}}_{n,k,s}$ and those
in ${\mathcal{N}}_{n+(k-s),k}$.
\begin{lemma}
\label{nonskip-monomial-factor}
If $x_1^{a_1} \cdots x_n^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathcal{N}}_{n+(k-s),k}$,
then $x_1^{a_1} \cdots x_n^{a_n} \in {\mathcal{N}}_{n,k,s}$.
Conversely, if $x_1^{a_1} \cdots x_n^{a_n} \in {\mathcal{N}}_{n,k,s}$ and
$0 \leq a_{n+1} < a_{n+2} < \cdots < a_{n+(k-s)} < kr$ satisfy
\begin{equation*}
a_{n+1} \equiv a_{n+2} \equiv \cdots \equiv a_{n+(k-s)} \equiv i \text{ (mod $r$)}
\end{equation*}
for some $0 \leq i \leq r-1$, then
$x_1^{a_1} \cdots x_n^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathcal{N}}_{n+(k-s),k}$.
\end{lemma}
\begin{proof}
The first statement is clear from the definitions of ${\mathcal{N}}_{n+(k-s),k}$ and ${\mathcal{N}}_{n,k,s}$. For the second statement,
let $m' := x_1^{a_1} \cdots x_n^{a_n} \in {\mathcal{N}}_{n,k,s}$ and let $0 \leq a_{n+1} < a_{n+2} < \cdots < a_{n+(k-s)} < kr$
be as in the statement of the lemma.
We argue that $m := x_1^{a_1} \cdots x_n^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathcal{N}}_{n+(k-s),k}$.
Since $m' \in {\mathcal{N}}_{n,k,s}$, we know that $x_i^{kr} \nmid m$ for $1 \leq i \leq n + (k-s)$. Let $S \subseteq [n + (k-s)]$
satisfy $|S| = n-s+1$. We need to show ${\mathbf {x}}(S)^r \nmid m$.
If $S \subseteq [n]$, then ${\mathbf {x}}(S)^r \nmid m$ because
${\mathbf {x}}(S)^r \nmid m'$. On the other hand, if $n + i \in S$ for some $1 \leq i \leq k-s$,
the power $p_{n+i}$ of $x_{n+i}$
in ${\mathbf {x}}(S)^r$ is $\geq r \cdot (s+i)$. However, our assumptions on $(a_{n+1}, a_{n+2}, \dots, a_{n+(k-s)})$ force
$a_{n+i} \leq a_{n+(k-s)} - r \cdot (k-s-i) < kr - r \cdot (k-s-i) = r \cdot (s+i)$, which implies ${\mathbf {x}}(S)^r \nmid m$.
\end{proof}
We use the map $\Psi$ from Section~\ref{Hilbert} to count ${\mathcal{N}}_{n,k,s}$.
\begin{lemma}
\label{size-of-n}
We have $|{\mathcal{N}}_{n,k,s}| = |{\mathcal{OP}}_{n,k,s}|$.
\end{lemma}
\begin{proof}
Consider the bijection $\Psi: {\mathcal{OP}}_{n+(k-s),k} \rightarrow {\mathcal{N}}_{n+(k-s),k}$ from Section~\ref{Hilbert}.
We have ${\mathcal{OP}}_{n,k,s} \subseteq {\mathcal{OP}}_{n+(k-s),k}$. We leave it for the reader to check that
\begin{equation*}
\Psi({\mathcal{OP}}_{n,k,s}) = {\mathcal{N}}'_{n,k,s},
\end{equation*}
where ${\mathcal{N}}'_{n,k,s}$ consists of those monomials
$x_1^{a_1} \cdots x_{n}^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathcal{N}}_{n+(k-s),k}$
which satisfy
\begin{equation*}
(a_{n+1}, a_{n+2}, \dots, a_{n+(k-s)}) = (rs + (r-1), r(s+1) + (r-1), \dots, r(k-1) + (r-1)).
\end{equation*}
(The $+(r-1)$ terms come from the fact that the letters $n+1, \dots, n+(k-s)$ all have color $0$ and
$\Psi$ involves a {\em complementary} color contribution.)
Lemma~\ref{nonskip-monomial-factor} applies to show $|{\mathcal{N}}'_{n,k,s}| = |{\mathcal{N}}_{n,k,s}|$.
\end{proof}
We are ready to determine the ungraded isomorphism type of the $G_n$-module
$S_{n,k,s}$.
\begin{lemma}
\label{s-dimension-lemma-generalized}
We have $S_{n,k,s} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k,s}]$. In particular, we have
$\dim(S_{n,k,s}) = |{\mathcal{OP}}_{n,k,s}|$.
\end{lemma}
\begin{proof}
By Lemma~\ref{j-contained-in-t-generalized} we have $\dim(S_{n,k,s}) \geq |{\mathcal{OP}}_{n,k,s}|$.
Lemma~\ref{skip-leading-terms} and
Lemma~\ref{size-of-n} imply that the standard monomial basis of $S_{n,k,s}$ with respect to the
lexicographic term order has size $\leq |{\mathcal{N}}_{n,k,s}| = |{\mathcal{OP}}_{n,k,s}|$, so that
$\dim(S_{n,k,s}) = |{\mathcal{OP}}_{n,k,s}|$. Lemma~\ref{j-contained-in-t-generalized} gives a
$G_n$-module surjection $S_{n,k,s} \twoheadrightarrow {\mathbb {C}}[{\mathcal{OP}}_{n,k,s}]$;
dimension counting shows that this surjection is an isomorphism.
\end{proof}
\subsection{Idempotents and $e_j({\mathbf {x}}^{(i^*)})^{\perp}$}
For $1 \leq j \leq n$ and $1 \leq i \leq r$,
we want to develop a module-theoretic analog of acting by the operator
$e_j({\mathbf {x}}^{(i^*)})^{\perp}$ on Frobenius images.
If $V$ is a $G_n$-module, acting by $e_j({\mathbf {x}}^{(i^*)})^{\perp}$ on
${\mathrm {Frob}}(V)$ will correspond to taking the image of $V$ under a certain group algebra
idempotent $\epsilon_{i,j} \in {\mathbb {C}}[G_n]$.
Let $1 \leq j \leq n$ and consider the corresponding parabolic subgroup
$G_{(n-j,j)} = G_{n-j} \times G_j$ of $G_n$.
The factor $G_j$ acts on the {\em last} $j$ letters $n-j+1, \dots, n-1, n$ of $\{1, 2, \dots, n\}$.
For $1 \leq j \leq n$ and $1 \leq i \leq r$,
let $\epsilon_{i,j}$ be the idempotent in the group algebra of $G_n$ given by
\begin{equation}
\epsilon_{i,j} := \frac{1}{r^j \cdot j!}
\sum_{g \in {\mathbb {Z}}_r \wr {\mathfrak{S}}_j} {\mathrm {sign}}(g) \cdot \overline{\chi(g)^i} \cdot g \in {\mathbb {C}}[G_n].
\end{equation}
(Recall that $\chi(g)$ is the product of the nonzero entries in the $j \times j$ monomial matrix $g$.)
The idempotent $\epsilon_{i,j}$ commutes with the action of $G_{n-j}$. In particular,
if $V$ is a $G_n$-module, then $\epsilon_{i,j} V$ is a
$G_{n-j}$-module.
The relationship between ${\mathrm {Frob}}(V)$ and ${\mathrm {Frob}}(\epsilon_{i,j}V)$ is as follows.
\begin{lemma}
\label{e-perp-on-v}
Let $V$ be a $G_n$-module, let $1 \leq j \leq n$, and let $1 \leq i \leq r$.
We have
\begin{equation}
{\mathrm {Frob}}(\epsilon_{i,j} V) = e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {Frob}}(V).
\end{equation}
In particular, if $V$ is graded, we have
\begin{equation}
{\mathrm {grFrob}}(\epsilon_{i,j} V; q) = e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {grFrob}}(V; q).
\end{equation}
\end{lemma}
\begin{proof}
The proof is a standard application of Frobenius reciprocity
and symmetric function theory (and can be found in \cite{GP} in the case $r = 1$).
It suffices to prove this lemma when $V$ is irreducible, so let $V = \bm{S^{\lambda}}$ for some $r$-partition
${ \bm{\lambda} } \vdash_r n$. Consider the parabolic
subgroup $G_{(n-j,j)} \subseteq G_n$. Irreducible representations
of $G_{(n-j,j)}$ have the form $\bm{S^{\mu}} \otimes \bm{S^{\nu}}$
for $\bm{\mu} \vdash_r n-j$ and $\bm{\nu} \vdash_r j$. By Frobenius reciprocity,
we have
\begin{align*}
\text{(multiplicity of $\bm{S^{\mu}} \otimes \bm{S^{\nu}}$ in
$\mathrm{Res}^{G_n}_{G_{(n-j,j)}} \bm{S^{\lambda}}$)} &=
\text{(multiplicity of $\bm{S^{\lambda}}$ in
$\mathrm{Ind}^{G_n}_{G_{(n-j,j)}} \bm{S^{\mu}} \otimes \bm{S^{\nu}}$)} \\
&=
\text{(coefficient of $\bm{s_{\lambda}(x)}$ in $\bm{s_{\mu}(x)} \cdot \bm{s_{\nu}(x)}$)}.
\end{align*}
The coefficient of $\bm{s_{\lambda}(x)}$ in the Schur expansion of $\bm{s_{\mu}(x)} \cdot \bm{s_{\nu}(x)}$ is
\begin{equation*}
\bm{c^{\lambda}_{\mu,\nu}} := c_{\mu^{(1)}, \nu^{(1)}}^{\lambda^{(1)}} \cdots c_{\mu^{(r)}, \nu^{(r)}}^{\lambda^{(r)}},
\end{equation*}
where the numbers $c_{\mu^{(1)}, \nu^{(1)}}^{\lambda^{(1)}}, \dots, c_{\mu^{(r)}, \nu^{(r)}}^{\lambda^{(r)}}$ are
Littlewood-Richardson coefficients.
By the last paragraph, we have the isomorphism of $G_{(n-j,j)}$-modules
\begin{equation}
\mathrm{Res}^{G_n}_{G_{(n-j,j)}} \bm{S^{\lambda}} \cong
\bigoplus_{\substack{ \bm{\mu} \vdash_r n-j \\ \bm{\nu} \vdash_r j}}
\bm{c_{\mu,\nu}^{\lambda}} (\bm{S^{\mu}} \otimes \bm{S^{\nu}}),
\end{equation}
which implies the isomorphism of $G_{n-j}$-modules
\begin{equation}
\epsilon_{i,j} \bm{S^{\lambda}} \cong
\bigoplus_{\substack{ \bm{\mu} \vdash_r n-j \\ \bm{\nu} \vdash_r j}}
\bm{c_{\mu,\nu}^{\lambda}} (\bm{S^{\mu}} \otimes \epsilon_{i,j} \bm{S^{\nu}}).
\end{equation}
However, since the idempotent $\epsilon_{i,j}$ projects onto the
$\bm{\nu_0} := (\varnothing, \dots, (1^j), \dots, \varnothing)$-isotypic component of any
$G_j$-module (where the nonempty
partition is in position $i$), we have
\begin{equation}
\epsilon_{i,j} \bm{S^{\nu}} = \begin{cases}
\bm{S^{\nu_0}} & \bm{\nu} = \bm{\nu_0} \\
0 & \bm{\nu} \neq \bm{\nu_0}.
\end{cases}
\end{equation}
Since $\bm{S^{\nu_0}}$ is 1-dimensional, we deduce
\begin{equation}
\epsilon_{i,j} \bm{S^{\lambda}} \cong
\bigoplus_{\bm{\mu} \vdash_r n-j}
\bm{c_{\mu, \nu_0}^{\lambda}} \bm{S^{\mu}},
\end{equation}
or
\begin{equation}
{\mathrm {Frob}}(\epsilon_{i,j} \bm{S^{\lambda}}) = \sum_{\bm{\mu} \vdash_r n-j}
\bm{c_{\mu, \nu_0}^{\lambda}} \bm{s_{\mu}}({\mathbf {x}}).
\end{equation}
To complete the proof, observe that ${\mathrm {Frob}}(S^{\bm{\nu_0}}) = e_j({\mathbf {x}}^{(i)})$ and
apply the definition of adjoint operators (together with the dualizing operation $i \mapsto i^*$
in the relevant inner product $\langle \cdot, \cdot \rangle$).
\end{proof}
We will need to consider the action of the idempotent $\epsilon_{i,j}$ on polynomials in ${\mathbb {C}}[{\mathbf {x}}_n]$.
Our basic tool is
the following lemma describing the action of $\epsilon_{i,j}$ on monomials in the variables
$x_{n-j+1}, \dots, x_n$.
\begin{lemma}
\label{last-variable-lemma}
Let $(a_{n-j+1}, \dots, a_n)$ be a length $j$ sequence of nonnegative integers and consider the corresponding
monomial $x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}$. Unless the numbers $a_{n-j+1}, \dots, a_n$ are distinct
and all congruent to $-i$ modulo $r$, we have
\begin{equation}
\epsilon_{i,j} \cdot (x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}) = 0.
\end{equation}
Furthermore, if $(a'_{n-j+1}, \dots, a'_n)$ is a rearrangement of $(a_{n-j+1}, \dots, a_n)$, we have
\begin{equation}
\epsilon_{i,j} \cdot (x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}) =
\pm \epsilon_{i,j} \cdot (x_{n-j+1}^{a'_{n-j+1}} \cdots x_n^{a'_n}).
\end{equation}
\end{lemma}
\begin{proof}
Recall that $G_n$ acts on ${\mathbb {C}}[{\mathbf {x}}_n]$ by linear substitutions.
In particular, if $1 \leq \ell \leq n$ and $\pi \in {\mathfrak{S}}_n \subseteq G_n$, we have
$\pi.x_{\ell} = x_{\pi_{\ell}}$. Moreover, if $g = \mathrm{diag}(g_1, \dots, g_n) \in G_n$
is a diagonal matrix, we have
$g.x_{\ell} = g_{\ell}^{-1} x_{\ell}$. Using these rules, the lemma is a routine computation.
\end{proof}
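The vanishing criterion in Lemma~\ref{last-variable-lemma} can be verified mechanically for small $j$ and $r$. The Python sketch below is illustrative only, and it encodes two conventions that are assumptions here rather than statements from this section: ${\mathrm {sign}}(g)$ is taken to be the sign of the underlying permutation, and $g = \pi \cdot \mathrm{diag}(\zeta^{c_1}, \dots, \zeta^{c_j})$ is taken to send $x_\ell$ to $\zeta^{-c_\ell} x_{\pi_\ell}$, as in the proof above.

```python
from itertools import permutations, product
from math import factorial
import cmath

def eps_on_monomial(exps, i, r):
    """Image of x_1^{a_1} ... x_j^{a_j} under eps_{i,j} (j = len(exps)),
    returned as a dict {exponent tuple: complex coefficient}."""
    j = len(exps)
    zeta = cmath.exp(2j * cmath.pi / r)   # 2j here is the imaginary literal
    out = {}
    for pi in permutations(range(j)):
        # sign of the underlying permutation, by counting inversions
        sign = (-1) ** sum(pi[a] > pi[b]
                           for a in range(j) for b in range(a + 1, j))
        # x_l -> zeta^{-c_l} x_{pi(l)}: variable m receives exponent a_{pi^{-1}(m)}
        new = tuple(exps[pi.index(m)] for m in range(j))
        for cols in product(range(r), repeat=j):
            coeff = sign * zeta ** (-i * sum(cols)
                                    - sum(c * a for c, a in zip(cols, exps)))
            out[new] = out.get(new, 0) + coeff / (r ** j * factorial(j))
    return out

def is_zero(poly, tol=1e-9):
    return all(abs(c) < tol for c in poly.values())

# eps_{i,j} kills a monomial unless its exponents are distinct and all == -i (mod r).
j, r = 2, 2
for i in (1, 2):
    for exps in product(range(2 * r), repeat=j):
        expected_nonzero = (len(set(exps)) == j
                            and all(a % r == (-i) % r for a in exps))
        nonzero = not is_zero(eps_on_monomial(exps, i, r))
        assert nonzero == expected_nonzero
```

The color sum factors as $\prod_\ell \sum_c \zeta^{-c(i + a_\ell)}$, which vanishes unless every $a_\ell \equiv -i$ (mod $r$); repeated exponents are then killed by antisymmetrization over ${\mathfrak{S}}_j$, exactly as the lemma asserts.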
The group $G_j$ acts on the quotient ring
$V_{n,k,j} := {\mathbb {C}}[x_{n-j+1}, \dots, x_n] / \langle x_{n-j+1}^{kr}, \dots, x_n^{kr} \rangle$. For any $1 \leq i \leq r$, let
$\epsilon_{i,j} V_{n,k,j}$ be the image of $V_{n,k,j}$ under $\epsilon_{i,j}$. Then
$\epsilon_{i,j} V_{n,k,j}$ is a graded vector space on which the idempotent
$\epsilon_{i,j}$ acts as the identity operator.
As a consequence of Lemma~\ref{last-variable-lemma}, the set of polynomials
\begin{equation}
\{ \epsilon_{i,j} \cdot (x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}) \,:\,
0 \leq a_{n-j+1} < \cdots < a_n < kr, \text{ $a_{\ell} \equiv -i$ (mod $r$) for all $\ell$} \}
\end{equation}
descends to a basis for $\epsilon_{i,j} V_{n,k,j}$.
Counting the degrees of the monomials appearing in the above set,
we have the Hilbert series
\begin{equation}
{\mathrm {Hilb}}(\epsilon_{i,j} V_{n,k,j}; q) = q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r}.
\end{equation}
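This Hilbert series can be checked directly for small parameters: the basis elements of $\epsilon_{i,j} V_{n,k,j}$ are indexed by the strictly increasing exponent sequences in the set above, and their degree generating function matches the $q$-binomial expression. The Python sketch below (illustrative only) makes this comparison, modeling polynomials in $q$ as coefficient dictionaries.

```python
from itertools import combinations
from collections import Counter

def qbinom(k, j):
    """Gaussian binomial [k choose j]_t as {power: coeff}, via the Pascal-type
    recurrence [k, j]_t = [k-1, j-1]_t + t^j [k-1, j]_t."""
    if j < 0 or j > k:
        return Counter()
    if j == 0 or j == k:
        return Counter({0: 1})
    out = Counter(qbinom(k - 1, j - 1))
    for p, c in qbinom(k - 1, j).items():
        out[p + j] += c
    return out

def hilb_eps_V(k, j, r, i):
    """Hilbert series of eps_{i,j} V_{n,k,j}: one basis element for each
    sequence 0 <= a_1 < ... < a_j < kr with every a_l == -i (mod r)."""
    vals = [a for a in range(k * r) if a % r == (-i) % r]
    return Counter(sum(c) for c in combinations(vals, j))

# Compare with q^{j(r-i) + r*binom(j,2)} [k choose j]_{q^r}.
for k, j, r, i in [(3, 2, 2, 1), (3, 2, 2, 2), (4, 2, 3, 1), (3, 3, 2, 2)]:
    shift = j * (r - i) + r * j * (j - 1) // 2
    predicted = Counter({r * p + shift: c for p, c in qbinom(k, j).items()})
    assert hilb_eps_V(k, j, r, i) == predicted
```

For instance, with $k = 3$, $j = 2$, $r = 2$, $i = 1$ the admissible exponent pairs are drawn from $\{1, 3, 5\}$, giving $q^4 + q^6 + q^8 = q^{4} {3 \brack 2}_{q^2}$.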
The following generalization of \cite[Lem. 6.8]{HRS} uses the spaces
$\epsilon_{i,j} V_{n,k,j}$ to relate the modules $\epsilon_{i,j} S_{n,k}$ and
$S_{n-j,k,k-j}$.
\begin{lemma}
\label{tensor-isomorphism}
As graded $G_j$-modules we have
$\epsilon_{i,j} S_{n,k} \cong S_{n-j,k,k-j} \otimes \epsilon_{i,j} V_{n,k,j}$.
\end{lemma}
\begin{proof}
Write ${\mathbf {y}}_{n-j} = (y_1, \dots, y_{n-j}) = (x_1, \dots, x_{n-j})$ and
${\mathbf {z}}_j = (z_1, \dots, z_j) = (x_{n-j+1}, \dots, x_n)$, so that
${\mathbb {C}}[{\mathbf {x}}_n] = {\mathbb {C}}[{\mathbf {y}}_{n-j}, {\mathbf {z}}_j]$.
The operator $\epsilon_{i,j} \in {\mathbb {C}}[G_j]$ acts on the ${\mathbf {z}}$ variables and commutes with the ${\mathbf {y}}$ variables.
There is a natural multiplication map
\begin{equation}
\widetilde{\mu}: {\mathbb {C}}[{\mathbf {y}}_{n-j}] \otimes \epsilon_{i,j} V_{n,k,j} \rightarrow \epsilon_{i,j} {\mathbb {C}}[{\mathbf {x}}_n] / \epsilon_{i,j} J_{n,k}
\cong \epsilon_{i,j} S_{n,k}
\end{equation}
coming from the assignment $f({\mathbf {y}}_{n-j}) \otimes g({\mathbf {z}}_j) \mapsto f({\mathbf {y}}_{n-j}) g({\mathbf {z}}_j)$.
The map $\widetilde{\mu}$ commutes with the action of $G_{n-j}$ on the ${\mathbf {y}}$ variables.
We show that $\widetilde{\mu}$ descends to the desired isomorphism.
We calculate
\begin{equation}
\epsilon_{i,j}(e_d({\mathbf {y}}_{n-j}^r, {\mathbf {z}}_j^r)) = \sum_{a + b = d} e_a({\mathbf {y}}_{n-j}^r) \epsilon_{i,j}(e_b({\mathbf {z}}_j^r)) =
e_d({\mathbf {y}}_{n-j}^r)
\end{equation}
for any $d > 0$. It follows that $e_d({\mathbf {y}}_{n-j}^r) \in \epsilon_{i,j} J_{n,k}$ for all $d > n-k$.
For any $f({\mathbf {z}}_j) \in \epsilon_{i,j} V_{n,k,j}$ we have
\begin{equation}
\widetilde{\mu}(y_{\ell}^{kr} \otimes f({\mathbf {z}}_j)) = y_{\ell}^{kr} f({\mathbf {z}}_j)
= y_{\ell}^{kr} \epsilon_{i,j} (f({\mathbf {z}}_j)) = \epsilon_{i,j} (y_{\ell}^{kr} f({\mathbf {z}}_j)) \in \epsilon_{i,j} J_{n,k},
\end{equation}
where we used the fact that $\epsilon_{i,j}$ acts as the identity operator on
$\epsilon_{i,j} V_{n,k,j}$.
By the last paragraph, we have $J_{n-j,k,k-j} \otimes \epsilon_{i,j} V_{n,k,j} \subseteq \mathrm{Ker}(\widetilde{\mu})$.
The map $\widetilde{\mu}$ therefore induces a map
\begin{equation}
\mu: S_{n-j,k,k-j} \otimes \epsilon_{i,j} V_{n,k,j} \rightarrow \epsilon_{i,j} {\mathbb {C}}[{\mathbf {x}}_n]/\epsilon_{i,j} J_{n,k}
\cong \epsilon_{i,j} S_{n,k}.
\end{equation}
To determine the dimension of the target of $\mu$, consider the action of $\epsilon_{i,j}$
on ${\mathbb {C}}[{\mathcal{OP}}_{n,k}]$. Given $\sigma \in {\mathcal{OP}}_{n,k}$, we have $\epsilon_{i,j}.\sigma = 0$
if and only if two of the big letters $n-j+1, \dots, n-1, n$ lie in the same block of $\sigma$.
Moreover, if $\sigma'$ is obtained from $\sigma$ by rearranging the letters
$n-j+1, \dots, n-1, n$ and/or changing their colors, then $\epsilon_{i,j}.\sigma'$ is a scalar multiple
of $\epsilon_{i,j}.\sigma$.
By Theorem~\ref{ungraded-isomorphism-type},
the dimension of the target of $\mu$ is
\begin{equation}
\label{mu-dimension}
\dim(\epsilon_{i,j} S_{n,k}) = \dim(\epsilon_{i,j} {\mathbb {C}}[{\mathcal{OP}}_{n,k}]) = {k \choose j} \cdot |{\mathcal{OP}}_{n-j,k,k-j}|,
\end{equation}
where the binomial coefficient ${k \choose j}$ comes from deciding which of the $k$ blocks of $\sigma$
receive the $j$ big letters.
On the other hand, Lemma~\ref{s-dimension-lemma-generalized} and
the discussion after Lemma~\ref{last-variable-lemma} imply that the domain of $\mu$
also has dimension given by (\ref{mu-dimension}).
To prove that $\mu$ gives the desired isomorphism, it is therefore enough to show that $\mu$
is surjective.
To see that $\mu$ is surjective, let ${\mathcal {C}}_{n,k,j}$ be the set
of polynomials of the form $\epsilon_{i,j} m({\mathbf {x}}_n)$, where
$m({\mathbf {x}}_n) = m({\mathbf {y}}_{n-j}) \cdot m({\mathbf {z}}_j) \in {\mathcal {N}}_{n,k}$ has the property that
$m({\mathbf {z}}_j) = z_1^{a_1} \cdots z_j^{a_j}$ with $a_1 < \cdots < a_j$ and
$a_{\ell} \equiv -i$ (mod $r$) for all $\ell$. We claim that ${\mathcal {C}}_{n,k,j}$ descends to a basis of
$\epsilon_{i,j} S_{n,k}$.
Since ${\mathcal {N}}_{n,k}$ is a basis of $S_{n,k}$, the set $\{ \epsilon_{i,j} m({\mathbf {x}}_n) \,:\, m({\mathbf {x}}_n) \in {\mathcal {N}}_{n,k} \}$
spans $\epsilon_{i,j} S_{n,k}$.
Let $m({\mathbf {x}}_n) = m({\mathbf {y}}_{n-j}) \cdot m({\mathbf {z}}_j) \in {\mathcal {N}}_{n,k}$.
By Lemma~\ref{last-variable-lemma}, we have $\epsilon_{i,j} m({\mathbf {x}}_n) = 0$ unless
$m({\mathbf {z}}_j) = z_1^{a_1} \cdots z_j^{a_j}$ with $(a_1, \dots, a_j)$ distinct and
$a_{\ell} \equiv -i$ (mod $r$) for all $\ell$.
Also, if $m({\mathbf {z}}_j)' = z_1^{a_1'} \cdots z_j^{a_j'}$ for any permutation $(a_1', \dots, a_j')$
of $(a_1, \dots, a_j)$, then $\epsilon_{i,j} m({\mathbf {x}}_n) = \pm \epsilon_{i,j} m({\mathbf {y}}_{n-j}) \cdot m({\mathbf {z}}_j)'$.
It follows that ${\mathcal {C}}_{n,k,j}$ descends to a spanning set of $\epsilon_{i,j} S_{n,k}$.
Lemmas~\ref{nonskip-monomial-factor}, \ref{size-of-n}, and
\ref{s-dimension-lemma-generalized} imply
\begin{equation}
|{\mathcal {C}}_{n,k,j}| = {k \choose j} \cdot |{\mathcal{OP}}_{n-j,k,k-j}| = \dim(\epsilon_{i,j} S_{n,k}).
\end{equation}
It follows that ${\mathcal {C}}_{n,k,j}$ descends to a basis of $\epsilon_{i,j} S_{n,k}$.
Consider a typical element $\epsilon_{i,j} m({\mathbf {x}}_n) = m({\mathbf {y}}_{n-j}) \cdot \epsilon_{i,j} m({\mathbf {z}}_j) \in {\mathcal {C}}_{n,k,j}$.
We have
\begin{equation}
\mu(m({\mathbf {y}}_{n-j}) \otimes \epsilon_{i,j} m({\mathbf {z}}_j)) = m({\mathbf {y}}_{n-j}) \cdot \epsilon_{i,j} m({\mathbf {z}}_j) = \epsilon_{i,j} m({\mathbf {x}}_n),
\end{equation}
so that $\epsilon_{i,j} m({\mathbf {x}}_n)$ lies in the image of $\mu$. It follows that $\mu$ is surjective.
\end{proof}
By Lemma~\ref{tensor-isomorphism}, we have
\begin{align}
e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {grFrob}}(S_{n,k}; q) &=
{\mathrm {Hilb}}(\epsilon_{i,j} V_{n,k,j}; q) \cdot {\mathrm {grFrob}}(S_{n-j,k,k-j}; q) \\
&= q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r} \cdot {\mathrm {grFrob}}(S_{n-j,k,k-j}; q).
\end{align}
If we want ${\mathrm {grFrob}}(S_{n,k}; q)$ to satisfy the same recursion that $\bm{D_{n,k}}({\mathbf {x}};q)$ satisfies
from Lemma~\ref{d-under-e-perp}, our goal is therefore
\begin{lemma}
\label{target-lemma}
\begin{equation}
\label{target-equation}
{\mathrm {grFrob}}(S_{n-j,k,k-j};q) = \sum_{m = \max(1,k-j)}^{\min(k,n-j)}
q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} {\mathrm {grFrob}}(S_{n-j,m}; q).
\end{equation}
\end{lemma}
\begin{proof}
This is proven using the same reasoning as in the proofs of \cite[Lem. 6.9, Lem. 6.10]{HRS};
one just makes the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$
and $q \mapsto q^r$.
\end{proof}
We are ready to describe the graded isomorphism types of $S_{n,k}$ and $R_{n,k}$.
\begin{theorem}
\label{graded-isomorphism-type}
Let $n, k,$ and $r$ be positive integers with $n \geq k$ and $r \geq 2$.
We have
\begin{equation}
{\mathrm {grFrob}}(S_{n,k}; q) = \bm{D_{n,k}}({\mathbf {x}}; q)
\end{equation}
and
\begin{equation}
{\mathrm {grFrob}}(R_{n,k}; q) = \sum_{z = 0}^{n-k} q^{krz} \cdot \bm{s}_{\varnothing, \dots, \varnothing, (z)}({\mathbf {x}}) \cdot
\bm{D_{n-z,k}}({\mathbf {x}}; q).
\end{equation}
\end{theorem}
When $k = n$, the graded Frobenius image of $R_{n,n} = S_{n,n}$ was calculated by
Stembridge \cite{Stembridge}.
\begin{proof}
By Lemma~\ref{target-lemma} (and the discussion preceding it), Lemma~\ref{d-under-e-perp},
and induction, we see that
\begin{equation}
e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {grFrob}}(S_{n,k}; q) =
e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{D_{n,k}}({\mathbf {x}}; q)
\end{equation}
for all $j \geq 1$ and $1 \leq i \leq r$. Lemma~\ref{e-perp-lemma} therefore gives the first statement.
The second statement is a consequence of Proposition~\ref{r-to-s-reduction}.
\end{proof}
\begin{example}
Theorem~\ref{graded-isomorphism-type} may be verified directly in the case $n = k = 1$. We have
$S_{1,1} = R_{1,1} = {\mathbb {C}}[x_1]/\langle x_1^r \rangle$. The group $G_1 \cong G = \langle \zeta \rangle$ acts on
$S_{1,1}$ by $\zeta.x_1^i = \zeta^{-i} x_1^i$ for $0 \leq i < r$. Recalling our convention for the characters of the
cyclic group $G$, we have
\begin{equation}
{\mathrm {grFrob}}(S_{1,1}; q) = \bm{s}_{\varnothing, \dots, \varnothing, (1)} \cdot q^0 + \cdots
+ \bm{s}_{\varnothing, (1), \dots, \varnothing} \cdot q^{r-2} +
\bm{s}_{(1), \varnothing, \dots, \varnothing} \cdot q^{r-1}.
\end{equation}
On the other hand, the elements of ${\mathrm {SYT}}^r(1)$ are the tableaux
\begin{equation*}
(\varnothing, \varnothing, \dots, \, \,
\begin{Young}
1
\end{Young} \,), \, \, \dots, \, \,
(\varnothing, \begin{Young} 1 \end{Young} \, , \dots \, \varnothing),
(\begin{Young} 1 \end{Young} \, , \varnothing, \dots, \varnothing).
\end{equation*}
The major indices of these tableaux are (from left to right) $r-1, \dots, 1, 0$. By Proposition~\ref{d-schur-expansion}
we have
\begin{equation}
\bm{D_{1,1}}({\mathbf {x}};q) = {\mathrm {rev}}_q \left[ \bm{s}_{\varnothing, \dots, \varnothing, (1)} \cdot q^{r-1} + \cdots
+ \bm{s}_{\varnothing, (1), \dots, \varnothing} \cdot q^{1} +
\bm{s}_{(1), \varnothing, \dots, \varnothing} \cdot q^{0} \right],
\end{equation}
which agrees with Theorem~\ref{graded-isomorphism-type}.
\end{example}
\begin{example}
Let us consider Theorem~\ref{graded-isomorphism-type} in the case $(n,k,r) = (3,2,2)$.
By Proposition~\ref{d-schur-expansion}, the only elements of ${\mathrm {SYT}}^2(3)$ which contribute to
$\bm{D_{3,2}}({\mathbf {x}};q)$ are those with $\geq 1$ descent.
\begin{small}
\begin{equation*}
\begin{Young}
1 \\ 2 \\ 3 \\ \end{Young} \, , \, \varnothing \hspace{0.3in} \begin{Young} 1 & 2 \\ 3 \end{Young} \, , \, \varnothing
\hspace{0.3in}
\begin{Young} 1 & 3 \\ 2 \end{Young} \, , \, \varnothing \hspace{0.3in}
\begin{Young} 1 \\ 2 \end{Young} \, , \, \begin{Young} 3 \end{Young} \hspace{0.3in}
\begin{Young} 1 & 2 \end{Young} \, , \, \begin{Young} 3 \end{Young} \hspace{0.3in}
\begin{Young} 1 \\ 3 \end{Young} \, , \, \begin{Young} 2 \end{Young} \hspace{0.3in}
\begin{Young} 1 & 3 \end{Young} \, , \, \begin{Young} 2 \end{Young} \hspace{0.3in}
\begin{Young} 2 \\ 3 \end{Young} \, , \, \begin{Young} 1 \end{Young}
\end{equation*}
\begin{equation*}
\begin{Young} 1 \end{Young} \, , \, \begin{Young} 2 \\ 3 \end{Young} \hspace{0.3in}
\begin{Young} 1 \end{Young} \, , \, \begin{Young} 2 & 3 \end{Young} \hspace{0.3in}
\begin{Young} 2 \end{Young} \, , \, \begin{Young} 1 \\ 3 \end{Young} \hspace{0.3in}
\begin{Young} 2 \end{Young} \, , \, \begin{Young} 1 & 3 \end{Young} \hspace{0.3in}
\begin{Young} 3 \end{Young} \, , \, \begin{Young} 1 \\ 2 \end{Young} \hspace{0.3in}
\varnothing \, , \, \begin{Young} 1 & 2 \\ 3 \end{Young} \hspace{0.3in}
\varnothing \, , \, \begin{Young} 1 & 3 \\ 2 \end{Young} \hspace{0.3in}
\varnothing \, , \, \begin{Young} 1 \\ 2 \\ 3 \end{Young}
\end{equation*}
\end{small}
The major indices of these tableaux are (in matrix format)
$\begin{pmatrix}
6 & 4 & 2 & 7 & 5 & 3 & 3 & 5 \\
8 & 4 & 6 & 6 & 4 & 7 & 5 & 9
\end{pmatrix}$ while the descent numbers are
$\begin{pmatrix}
2 & 1 & 1 & 2 & 1 & 1 & 1 & 1 \\
2 & 1 & 1 & 1 & 1 & 1 & 1 & 2
\end{pmatrix}$. The statistic ${\mathrm {maj}}({ \bm{T}}) + r {n-k \choose 2} - r(n-k) {\mathrm {des}}({ \bm{T}})$ appearing in the exponent
in Proposition~\ref{d-schur-expansion} is therefore
$
\begin{pmatrix}
2 & 2 & 0 & 3 & 3 & 1 & 1 & 3 \\
4 & 2 & 4 & 4 & 2 & 5 & 3 & 5
\end{pmatrix}.
$
If we apply $\omega$ and multiply by ${{\mathrm {des}}({ \bm{T}}) \brack n-k}_{q^r} = [{\mathrm {des}}({ \bm{T}})]_{q^2}$, we see that
$\bm{D_{3,2}}({\mathbf {x}};q)$ is the $q$-reversal of
\begin{multline}
\bm{s}_{(3), \varnothing} \cdot (q^2 + q^4) + \bm{s}_{(2,1), \varnothing} \cdot q^2 +
\bm{s}_{(2,1), \varnothing} \cdot q^0 + \bm{s}_{(2), (1)} \cdot (q^3 + q^5) \\
+ \bm{s}_{(1,1), (1)} \cdot q^3 + \bm{s}_{(2), (1)} \cdot q^1 +
\bm{s}_{(1,1), (1)} \cdot q^1 + \bm{s}_{(2), (1)} \cdot q^3 \\
+ \bm{s}_{(1), (2)} \cdot (q^4 + q^6) + \bm{s}_{(1), (1,1)} \cdot q^2 +
\bm{s}_{(1), (2)} \cdot q^4 + \bm{s}_{(1), (1,1)} \cdot q^4 \\
+ \bm{s}_{(1), (2)} \cdot q^2 + \bm{s}_{\varnothing, (2,1)} \cdot q^5 +
\bm{s}_{\varnothing, (2,1)} \cdot q^3 + \bm{s}_{\varnothing, (3)} \cdot (q^5 + q^7).
\end{multline}
Collecting powers of $q$ and applying ${\mathrm {rev}}_q$, the graded Frobenius image ${\mathrm {grFrob}}(S_{3,2}; q)$ is
\begin{multline}
\label{small-expression}
\bm{s}_{\varnothing, (3)} \cdot q^0 + \bm{s}_{(1), (2)} \cdot q^1
+ (\bm{s}_{(2), (1)} + \bm{s}_{\varnothing, (2,1)} + \bm{s}_{\varnothing, (3)}) \cdot q^2 \\
+ (\bm{s}_{(3), \varnothing} + 2 \bm{s}_{(1), (2)} + \bm{s}_{(1), (1,1)})
\cdot q^3 +
(2 \bm{s}_{(2), (1)} + \bm{s}_{(1,1), (1)} + \bm{s}_{\varnothing, (2,1)}) \cdot q^4 \\
+ (\bm{s}_{(3), \varnothing} + \bm{s}_{(2,1), \varnothing} + \bm{s}_{(1), (1,1)} + \bm{s}_{(1), (2)})
\cdot q^5 +
(\bm{s}_{(2), (1)} + \bm{s}_{(1,1), (1)}) \cdot q^6
+ \bm{s}_{(2,1), \varnothing} \cdot q^7.
\end{multline}
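As a quick dimension check on the expansion above (our own verification, not part of the original argument): the dimension of the $G_3$-irreducible indexed by a pair $(\lambda,\mu)$ with $|\lambda| + |\mu| = 3$ is $\binom{3}{|\lambda|} f^{\lambda} f^{\mu}$, where $f^{\lambda}$ counts standard Young tableaux of shape $\lambda$, and the graded dimensions should sum to $\dim S_{3,2} = |{\mathcal{OP}}_{3,2}| = 2^3 \cdot 2! \cdot S(3,2) = 48$.

```python
from math import comb, factorial

def f_lam(lam):
    """Number of standard Young tableaux of shape lam, via the hook length formula."""
    n = sum(lam)
    if n == 0:
        return 1
    conj = [sum(1 for row in lam if row > c) for c in range(lam[0])]
    hook_product = 1
    for r, row in enumerate(lam):
        for c in range(row):
            hook_product *= (row - c) + (conj[c] - r) - 1
    return factorial(n) // hook_product

def dim_irrep(lam, mu, n):
    """Dimension of the G_n = Z_2 wr S_n irreducible indexed by the pair (lam, mu)."""
    return comb(n, sum(lam)) * f_lam(lam) * f_lam(mu)

# total multiplicity of each s_{lam,mu} across all powers of q in the displayed
# expansion of grFrob(S_{3,2}; q)
terms = [((), (3,), 2), ((1,), (2,), 4), ((2,), (1,), 4), ((), (2, 1), 2),
         ((3,), (), 2), ((1,), (1, 1), 2), ((1, 1), (1,), 2), ((2, 1), (), 2)]
total = sum(mult * dim_irrep(lam, mu, 3) for lam, mu, mult in terms)

# dim S_{3,2} = |OP_{3,2}|: 2^3 colorings times 2! * S(3,2) = 6 ordered set partitions
assert total == 2 ** 3 * factorial(2) * 3  # = 48
```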
Let us calculate ${\mathrm {grFrob}}(R_{3,2}; q)$.
A shorter calculation (left to the reader) shows that $\bm{D_{2,2}}({\mathbf {x}}; q)$ is given by
\begin{equation}
\label{new-expression}
\bm{s}_{\varnothing, (2)} \cdot q^0 + \bm{s}_{(1), (1)} \cdot q^1
+ (\bm{s}_{(2), \varnothing} + \bm{s}_{\varnothing, (1,1)}) \cdot q^2 +
\bm{s}_{(1), (1)} \cdot q^3 + \bm{s}_{(1,1), \varnothing} \cdot q^4.
\end{equation}
By Theorem~\ref{graded-isomorphism-type}, the Frobenius image ${\mathrm {grFrob}}(R_{3,2}; q)$ is given
by adding the product of (\ref{new-expression}) and $\bm{s}_{\varnothing, (1)}({\mathbf {x}}) \cdot q^4$ to
(\ref{small-expression}). Applying the Pieri rule we see that the Schur expansion of
${\mathrm {grFrob}}(R_{3,2}; q)$ is
\begin{multline}
\text{\rm{(expression in (}\ref{small-expression}))} \, +
(\bm{s}_{\varnothing, (3)} + \bm{s}_{\varnothing, (2,1)}) \cdot q^4 +
(\bm{s}_{(1), (2)} + \bm{s}_{(1), (1,1)}) \cdot q^5 \\
+ (\bm{s}_{(2), (1)} + \bm{s}_{\varnothing, (2,1)} + \bm{s}_{\varnothing, (1,1,1)}) \cdot q^6 +
(\bm{s}_{(1), (2)} + \bm{s}_{(1), (1,1)}) \cdot q^7
+ \bm{s}_{(1,1), (1)} \cdot q^8.
\end{multline}
\end{example}
\section{Conclusion}
\label{Conclusion}
In this paper we introduced a quotient $R_{n,k}$ of the polynomial ring ${\mathbb {C}}[{\mathbf {x}}_n]$ whose structure
is governed by the combinatorics of the set of $k$-dimensional faces ${\mathcal{F}}_{n,k}$ in the Coxeter complex
attached to $G_n$, where $G_n = {\mathbb {Z}}_r \wr {\mathfrak{S}}_n$ is a wreath product.
\begin{problem}
\label{reflection-group-generalization}
Let $W \subset GL_n({\mathbb {C}})$ be a complex reflection group and let $0 \leq k \leq n$. Find a graded $W$-module
$R_{W,k}$ which generalizes $R_{n,k}$.
\end{problem}
The quotient $R_{W,k}$ in Problem~\ref{reflection-group-generalization} should have combinatorics governed
by the $k$-dimensional faces ${\mathcal{F}}_{W,k}$ of some Coxeter complex-like object attached to $W$.
A natural collection of groups $W$ to look at is the $G(r,p,n)$ family of reflection groups. Recall that, for positive
integers $r, p, n$ with $p \mid r$, the group $G(r,p,n)$ is defined by
\begin{equation}
G(r,p,n) := \{ g \in G_n \,:\, \text{the product of the nonzero entries in $g$ is an $(r/p)^{th}$ root of unity} \}.
\end{equation}
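For small parameters this definition can be checked by direct enumeration (our own illustration, with function names ours): writing an element as a permutation matrix with nonzero entries $\zeta^{c_1}, \dots, \zeta^{c_n}$, the product of the entries is an $(r/p)^{th}$ root of unity exactly when $c_1 + \cdots + c_n \equiv 0 \pmod p$, which recovers $|G(r,p,n)| = r^n n!/p$.

```python
from itertools import product, permutations
from math import factorial

def size_G(r, p, n):
    """Count G(r,p,n) by enumerating (permutation, color vector) pairs; the
    product of the nonzero entries zeta^{c_1}, ..., zeta^{c_n} is an (r/p)-th
    root of unity iff c_1 + ... + c_n == 0 (mod p)."""
    count = 0
    for _perm in permutations(range(n)):
        for colors in product(range(r), repeat=n):
            if sum(colors) % p == 0:
                count += 1
    return count

# |G(r,p,n)| = r^n * n! / p whenever p divides r
assert size_G(2, 2, 3) == 2 ** 3 * factorial(3) // 2   # type D_3, order 24
assert size_G(4, 2, 2) == 4 ** 2 * factorial(2) // 2   # order 16
```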
It is well known that the $G(r,p,n)$-invariant polynomials ${\mathbb {C}}[{\mathbf {x}}_n]^{G(r,p,n)}$ have algebraically independent
generators $e_1({\mathbf {x}}_n^r), e_2({\mathbf {x}}_n^r), \dots, e_{n-1}({\mathbf {x}}_n^r),$ and $(x_1 \cdots x_n)^{r/p}$.
However, even in the case of $G(2,2,n)$, which is isomorphic to the real reflection group of type $D_n$,
the authors have been unable to construct a quotient of ${\mathbb {C}}[{\mathbf {x}}_n]$ which carries an action of
$G(2,2,n)$ whose dimension is given by the number of $k$-dimensional faces in the $D_n$-Coxeter complex.
If $W$ is any {\em real} reflection group and $\mathbb{F}$ is any field, there is an $\mathbb{F}$-algebra
$H_W(0)$ of dimension $|W|$ called the {\em 0-Hecke algebra} attached to $W$.
When $W$ is the symmetric group ${\mathfrak{S}}_n$,
there is an action of $H_W(0)$ on the polynomial ring $\mathbb{F}[{\mathbf {x}}_n]$ given by the
isobaric Demazure operators (see \cite{HuangRhoades}). When $W = {\mathfrak{S}}_n$, Huang and Rhoades
proved that the ideal
\begin{equation}
\langle h_k(x_1), h_k(x_1, x_2), \dots, h_k(x_1, x_2, \dots, x_n), e_n({\mathbf {x}}_n), e_{n-1}({\mathbf {x}}_n), \dots, e_{n-k+1}({\mathbf {x}}_n) \rangle
\subseteq \mathbb{F}[{\mathbf {x}}_n]
\end{equation}
is stable under this action, and that the corresponding quotient of $\mathbb{F}[{\mathbf {x}}_n]$ gives a graded version of
a natural action of $H_{{\mathfrak{S}}_n}(0)$ on $k$-block ordered set partitions of $[n]$.
This suggests the following problem.
\begin{problem}
\label{zero-hecke-problem}
Let $W$ be a real reflection group of rank $n$, let $H_W(0)$ be the 0-Hecke algebra attached to $W$, and let
$0 \leq k \leq n$. Describe a natural action of $H_W(0)$ on the set of $k$-dimensional faces in the Coxeter complex of $W$.
Give a graded version of this action as an $H_W(0)$-stable quotient of $\mathbb{F}[{\mathbf {x}}_n]$.
\end{problem}
Another possible direction for future research is motivated by the Delta Conjecture and the {\em Parking Conjecture}
of Armstrong, Reiner, and Rhoades \cite{ARR}. Let $W$ be an irreducible real reflection group with reflection
representation $V$ and Coxeter number $h$, and consider a homogeneous system of parameters
$\theta_1, \dots, \theta_n \in {\mathbb {C}}[V]_{h+1}$ of degree $h+1$ carrying the dual $V^*$ of the reflection
representation. Armstrong et al.\ introduce an inhomogeneous deformation $(\Theta - {\mathbf {x}})$ of the ideal
$(\Theta) = (\theta_1, \dots, \theta_n) \subseteq {\mathbb {C}}[V]$ generated by the $\theta_i$ and conjecture a
relationship between the quotient ${\mathbb {C}}[V]/(\Theta - {\mathbf {x}})$ and the $(W \times {\mathbb {Z}}_h)$-set $\mathsf{Park}^{NC}_W$
of `$W$-noncrossing parking functions' defined via Coxeter--Catalan theory.
When $W = {\mathfrak{S}}_n$ is the symmetric group,
the `classical' h.s.o.p.\ quotient ${\mathbb {C}}[V]/(\Theta)$ is known to have graded Frobenius image given by
(after a $q$-shift and an application of $\omega$) the Delta Conjecture expression in the case $k = n$ at the specialization $t = 1/q$.
In \cite[Prob. 7.8]{HRS} the problem was posed of finding a `$k \leq n$' extension of the Parking Conjecture
for any real reflection group $W$. The authors are hopeful that the quotients studied in this paper
will be helpful in this endeavor.
\end{document}
\begin{document}
\maketitle
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
Deep CCA is a recently proposed deep neural network extension of traditional canonical correlation analysis (CCA), and has been
successful for multi-view representation learning in several domains. However, stochastic optimization of the deep CCA objective is not straightforward, because the objective does not decouple over training examples. Previous optimizers for deep CCA are either
batch algorithms or stochastic optimizers that require large minibatches, both of which can have high memory consumption. In this paper, we tackle the problem of stochastic optimization for deep CCA with small minibatches, based on an iterative solution to the CCA objective, and show that we can achieve performance as good as that of previous optimizers while alleviating the memory requirement.
\end{abstract}
\section{Introduction}
\label{s:intro}
Stochastic gradient descent (SGD) is a fundamental and popular optimization method for machine learning problems~\cite{Bottou91a,Lecun_98b,Bottou04a,Zhang04b,Bertsek11a}. SGD is particularly well-suited for large-scale machine learning problems because it is extremely simple and easy to implement, it often achieves better generalization (test) performance (which is the focus of machine learning research) than sophisticated batch algorithms, and it usually achieves large error reduction very quickly in a small number of passes over the training set~\cite{BottouBousquet08a}. One intuitive explanation for the empirical success of stochastic gradient descent for large data is that it makes better use of data redundancy, with an extreme example given by \cite{Lecun_98b}: If the training set consists of $10$ copies of the same set of examples, then computing an estimate of the gradient over one single copy is $10$ times more efficient than computing the full gradient over the entire training set, while achieving the same optimization progress in the following gradient descent step.
At the same time, ``multi-view'' data are becoming increasingly available, and methods based on canonical correlation analysis (CCA)~\cite{Hotell36a} that use such data to learn representations (features) form an active research area. The views can be multiple measurement modalities, such as simultaneously recorded audio + video~\cite{Kidron_05a,Chaudh_09a}, audio + articulation~\cite{AroraLivesc13a}, images + text~\cite{Hardoon_04a,SocherLi10a,Hodosh_13a}, or parallel text in two languages~\cite{Vinokour_03a,Haghig_08a,Chandar_14a,FaruquiDyer14a,Lu_15a}, but may also be different information extracted from the same source, such as words + context~\cite{Pennin_14a} or document text + text of inbound hyperlinks~\cite{BickelScheff04a}. The presence of multiple information sources presents an opportunity to learn better representations (features) by analyzing multiple views simultaneously. Among various multi-view learning approaches, the recently proposed deep canonical correlation analysis \cite{Andrew_13a}, which extends traditional CCA with deep neural networks (DNNs), has been shown to be advantageous over previous methods in several domains \cite{Wang_15a,Wang_15b,YanMikolaj15a}, and scales to large data better than its nonparametric counterpart kernel CCA~\cite{LaiFyfe00a,BachJordan02a,Hardoon_04a}.
In contrast with most DNN-based methods, the objective of deep CCA couples together all of the training examples due to its whitening constraint, making stochastic optimization challenging. Previous optimizers for this model are
batch-based, e.g., limited-memory BFGS (L-BFGS) \cite{Nocedal80a} as in \cite{Andrew_13a}, or stochastic optimization with large minibatches~\cite{Wang_15a}, because it is difficult to obtain an accurate estimate of the gradient with a small subset of the training examples (again due to the whitening constraint). As a result, these approaches have high memory complexity and may not be practical for large DNN models with hundreds of millions of weight parameters (common with web-scale data~\cite{Dean_12a}), or if one would like to run the training procedure on GPUs, which are equipped with faster but smaller (more expensive) memory than CPUs. In such cases there is not enough memory to save all intermediate hidden activations of the batch/large minibatch used in error backpropagation.
In this paper, we tackle this problem with two key ideas. First, we reformulate the CCA solution with orthogonal iterations, and embed the DNN parameter training in the orthogonal iterations with a nonlinear least squares regression objective, which naturally decouples over training examples. Second, we use adaptive estimates of the covariances used by the CCA whitening constraints and carry out whitening \emph{only} for the minibatch used at each step to obtain training signals for the DNNs. This results in a stochastic optimization algorithm that can operate on small minibatches and thus consume little memory. Empirically, the new stochastic optimization algorithm performs as well as previous optimizers in terms of convergence speed, even when using small minibatches with which the previous stochastic approach makes no training progress.
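To make the second idea concrete, here is a toy numpy sketch of maintaining an adaptive covariance estimate and whitening only the current minibatch. This is our own illustration, not the paper's actual update rule: the exponential moving average and its decay rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, batch = 4, 8
C_hat = np.eye(d)   # running estimate of the network-output covariance
rho = 0.9           # decay rate of the moving average (an illustrative choice)

def whiten_minibatch(Fb):
    """Update the adaptive covariance estimate from the current minibatch only,
    then whiten just that minibatch with the updated estimate."""
    global C_hat
    Fb = Fb - Fb.mean(axis=1, keepdims=True)             # center the minibatch
    C_hat = rho * C_hat + (1 - rho) * (Fb @ Fb.T) / Fb.shape[1]
    w, Q = np.linalg.eigh(C_hat)                         # C_hat is symmetric PD
    return (Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T) @ Fb    # C_hat^{-1/2} Fb

out = whiten_minibatch(rng.standard_normal((d, batch)))
assert out.shape == (d, batch)
```

The point of the sketch is memory: each step touches only a $d \times \text{batch}$ slice of data plus a $d \times d$ statistic, never the full training set.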
In the following sections, we briefly introduce deep CCA and discuss the difficulties in training it (Section~\ref{s:dcca}); motivate and propose our new algorithm (Section~\ref{s:algorithm}); describe related work (Section~\ref{s:related}); and present experimental results comparing different optimizers (Section~\ref{s:experiments}).
\section{Deep CCA}
\label{s:dcca}
\noindent\textbf{Notation} In the multi-view feature learning setting, we have access to paired observations from two views, denoted $\{(\ensuremath{\mathbf{x}}_1,\ensuremath{\mathbf{y}}_1),\dots,(\ensuremath{\mathbf{x}}_N,\ensuremath{\mathbf{y}}_N)\}$, where $N$ is the training set size, $\ensuremath{\mathbf{x}}_i\in \ensuremath{\mathbb{R}}^{D_x}$ and $\ensuremath{\mathbf{y}}_i\in \ensuremath{\mathbb{R}}^{D_y}$ for $i=1,\dots,N$. We also denote the data matrices for View 1 and View 2 by $\ensuremath{\mathbf{X}}=[\ensuremath{\mathbf{x}}_1,\dots,\ensuremath{\mathbf{x}}_N]$ and $\ensuremath{\mathbf{Y}}=[\ensuremath{\mathbf{y}}_1,\dots,\ensuremath{\mathbf{y}}_N]$, respectively. We use bold-face letters, e.g.~$\ensuremath{\mathbf{f}}$, to denote mappings implemented by DNNs, with a corresponding set of learnable parameters, denoted, e.g., $\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}$. The dimensionality of the
learned features is denoted $L$.
\begin{figure}
\centering
\psfrag{x}[][]{$\ensuremath{\mathbf{x}}$}
\psfrag{y}[][]{$\ensuremath{\mathbf{y}}$}
\psfrag{v1}[][][0.8]{View 1}
\psfrag{v2}[][][0.8][90]{View 2}
\psfrag{U}[][]{$\ensuremath{\mathbf{U}}$}
\psfrag{V}[][]{$\ensuremath{\mathbf{V}}$}
\psfrag{f}[][]{$\ensuremath{\mathbf{f}}$}
\psfrag{g}[][]{$\ensuremath{\mathbf{g}}$}
\includegraphics[width=0.55\linewidth]{dcca.eps}
\caption{Schematic diagram of deep canonical correlation analysis.}
\label{f:dcca}
\end{figure}
Deep CCA (DCCA)~\cite{Andrew_13a} extends (linear) CCA~\cite{Hotell36a} by extracting $d_x$- and $d_y$-dimensional nonlinear features with two DNNs $\ensuremath{\mathbf{f}}$ and $\ensuremath{\mathbf{g}}$ for views 1 and 2 respectively, such that the canonical correlation (measured by CCA) between the DNN outputs is maximized, as illustrated in Fig.~\ref{f:dcca}. The goal of the final CCA is to find $L \le \min(d_x,d_y)$ pairs of linear projection vectors $\ensuremath{\mathbf{U}} \in \ensuremath{\mathbb{R}}^{d_x \times L}$ and $\ensuremath{\mathbf{V}} \in \ensuremath{\mathbb{R}}^{d_y \times L}$ such that the projections of each view (a.k.a.~canonical variables,~\cite{Hotell36a}) are maximally correlated with their counterparts in the other view, constrained such that the dimensions in the representation are uncorrelated with each other.
Formally, the DCCA objective can be written as\footnote{In this paper, we use the scaled covariance matrices (scaled by $N$) so that the dimensions of the projection are orthonormal and comply with the custom of orthogonal iterations.}
\begin{gather} \label{e:dcca}
\max_{\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}, \ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}}, \ensuremath{\mathbf{U}}, \ensuremath{\mathbf{V}}} \quad \trace{\ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}} \ensuremath{\mathbf{G}}^\top \ensuremath{\mathbf{V}}} \\
\text{s.t.} \quad \ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \ensuremath{\mathbf{U}} = \ensuremath{\mathbf{V}}^\top \ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top \ensuremath{\mathbf{V}} = \ensuremath{\mathbf{I}}, \nonumber
\end{gather}
where $\ensuremath{\mathbf{F}}=\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{X}})=[\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{x}}_1),\dots,\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{x}}_N)] \in \ensuremath{\mathbb{R}}^{d_x \times N}$ and $\ensuremath{\mathbf{G}}=\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{Y}})=[\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{y}}_1),\dots,\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{y}}_N)] \in \ensuremath{\mathbb{R}}^{d_y \times N}$. We assume that $\ensuremath{\mathbf{F}}$ and $\ensuremath{\mathbf{G}}$ are centered at the origin for notational simplicity; if they are not, we can center them as a pre-processing operation. Notice that if we use the original input data without further feature extraction, i.e.~$\ensuremath{\mathbf{F}}=\ensuremath{\mathbf{X}}$ and $\ensuremath{\mathbf{G}}=\ensuremath{\mathbf{Y}}$, then we recover the CCA objective.
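In the linear special case just mentioned, the objective has a closed-form solution via an SVD of the whitened cross-covariance matrix. The following numpy sketch (our own illustration; all variable names are ours) verifies on random data that the whitening constraints hold and that the attained objective equals the sum of the top-$L$ singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx, dy, L = 500, 5, 4, 3

# paired, centered data matrices F (d_x x N) and G (d_y x N); taking F = X and
# G = Y gives the linear CCA special case of the objective
F = rng.standard_normal((dx, N)); F -= F.mean(axis=1, keepdims=True)
G = rng.standard_normal((dy, N)); G -= G.mean(axis=1, keepdims=True)

def inv_sqrt(M):
    """Inverse square root of a symmetric positive definite matrix."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

Cff, Cgg = F @ F.T, G @ G.T        # scaled (unnormalized) covariances
T = inv_sqrt(Cff) @ (F @ G.T) @ inv_sqrt(Cgg)
A, s, Bt = np.linalg.svd(T)

# optimal projections: top-L singular vectors pulled back through the whitening
U = inv_sqrt(Cff) @ A[:, :L]
V = inv_sqrt(Cgg) @ Bt.T[:, :L]

# the whitening constraints of the objective hold exactly
assert np.allclose(U.T @ Cff @ U, np.eye(L))
assert np.allclose(V.T @ Cgg @ V, np.eye(L))
# the attained objective equals the sum of the top-L singular values of T
assert np.isclose(np.trace(U.T @ F @ G.T @ V), s[:L].sum())
```

With DNN features $\ensuremath{\mathbf{F}}$ and $\ensuremath{\mathbf{G}}$ this inner problem must be re-solved as the networks change, which is the coupling that makes stochastic optimization difficult.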
In DCCA, the final features (projections) are
\begin{gather}\label{e:concat}
\tilde{\mathbf{f}}(\mathbf{x})=\mathbf{U}^\top \mathbf{f}(\mathbf{x}) \qquad \text{and} \qquad \tilde{\mathbf{g}}(\mathbf{y})=\mathbf{V}^\top \mathbf{g}(\mathbf{y}).
\end{gather}
We observe that the last CCA step, with linear projection mappings $\mathbf{U}$ and $\mathbf{V}$, can be considered as adding a linear layer on top of the feature extraction networks $\mathbf{f}$ and $\mathbf{g}$, respectively. In the following, we sometimes refer to the concatenated networks $\tilde{\mathbf{f}}$ and $\tilde{\mathbf{g}}$ as defined in \eqref{e:concat}, with weights $\mathbf{W}_{\tilde{\mathbf{f}}}=\{\mathbf{W}_{\mathbf{f}},\mathbf{U}\}$ and $\mathbf{W}_{\tilde{\mathbf{g}}}=\{\mathbf{W}_{\mathbf{g}},\mathbf{V}\}$.%
\footnote{In principle there is no need for the final linear layer; we could define DCCA such that the correlation objective and constraints are imposed on the final nonlinear layer. However, the linearity of the final layer is crucial for algorithmic implementations such as ours.}
Let $\bSigma_{fg}= \mathbf{F} \mathbf{G}^\top$, $\bSigma_{ff}=\mathbf{F} \mathbf{F}^\top$ and $\bSigma_{gg}=\mathbf{G} \mathbf{G}^\top$ be the (scaled) cross- and auto-covariance matrices of the feature-mapped data in the two views. It is well known that, when $\mathbf{f}$ and $\mathbf{g}$ are fixed, the last CCA step in \eqref{e:dcca} has a closed-form solution, as follows. Define $\mathbf{T}=\bSigma_{ff}^{-\frac{1}{2}} \bSigma_{fg} \bSigma_{gg}^{-\frac{1}{2}}$, and let $\mathbf{T}=\tilde{\mathbf{U}} \Lambda \tilde{\mathbf{V}}^\top$ be its rank-$L$ singular value decomposition (SVD), where $\Lambda$ contains the singular values $\sigma_1 \ge \dots \ge \sigma_L \ge 0$ on its diagonal.
Then the optimum of \eqref{e:dcca} is achieved at $(\mathbf{U},\mathbf{V})=(\bSigma_{ff}^{-\frac{1}{2}} \tilde{\mathbf{U}}, \bSigma_{gg}^{-\frac{1}{2}} \tilde{\mathbf{V}})$, and the optimal objective value (the total canonical correlation) is $\sum_{j=1}^L \sigma_j$. By turning the maximization into a minimization of the negated objective and adding $1/2$ times the trace of each constraint (a constant on the feasible set), it is straightforward to show that \eqref{e:dcca} is equivalent to the following problem:
\begin{gather}\label{e:dcca2}
\min_{\mathbf{W}_{\mathbf{f}}, \mathbf{W}_{\mathbf{g}}, \mathbf{U}, \mathbf{V}} \quad \frac{1}{2} \left\| \mathbf{U}^\top \mathbf{F} - \mathbf{V}^\top \mathbf{G} \right\|^2_F \\
\qquad \text{s.t.} \quad (\mathbf{U}^\top \mathbf{F}) (\mathbf{U}^\top \mathbf{F})^\top = (\mathbf{V}^\top \mathbf{G}) (\mathbf{V}^\top \mathbf{G})^\top = \mathbf{I}. \nonumber
\end{gather}
In other words, CCA minimizes the squared difference between the projections of the two views, subject to the whitening constraints. This alternative formulation of CCA will also shed light on our proposed algorithm for DCCA.
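As a concrete illustration of the closed-form solution and of the equivalence between the correlation and least-squares formulations, the following self-contained NumPy sketch (the synthetic data, sizes, and tolerances are our own illustrative assumptions, not part of the original development) computes the CCA projections via the SVD of $\mathbf{T}$, then checks the whitening constraints and the identity $\frac{1}{2}\|\mathbf{U}^\top\mathbf{F}-\mathbf{V}^\top\mathbf{G}\|_F^2 = L - \sum_{j=1}^L \sigma_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy centered data standing in for the network outputs F (view 1) and G (view 2).
d_x, d_y, N, L = 5, 4, 1000, 3
Z = rng.standard_normal((3, N))                       # shared latent signal
F = rng.standard_normal((d_x, 3)) @ Z + 0.5 * rng.standard_normal((d_x, N))
G = rng.standard_normal((d_y, 3)) @ Z + 0.5 * rng.standard_normal((d_y, N))
F -= F.mean(axis=1, keepdims=True)
G -= G.mean(axis=1, keepdims=True)

def inv_sqrt(S):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(w ** -0.5) @ Q.T

# Closed-form CCA: SVD of T = Sigma_ff^{-1/2} Sigma_fg Sigma_gg^{-1/2}.
S_ff, S_gg, S_fg = F @ F.T, G @ G.T, F @ G.T
T = inv_sqrt(S_ff) @ S_fg @ inv_sqrt(S_gg)
Ut, sigma, Vh = np.linalg.svd(T)
U = inv_sqrt(S_ff) @ Ut[:, :L]                        # optimal projection matrices
V = inv_sqrt(S_gg) @ Vh.T[:, :L]

# The projections satisfy the whitening constraints ...
assert np.allclose((U.T @ F) @ (U.T @ F).T, np.eye(L), atol=1e-6)
assert np.allclose((V.T @ G) @ (V.T @ G).T, np.eye(L), atol=1e-6)

# ... and the least-squares objective equals L minus the total correlation,
# which is exactly the equivalence between the two formulations.
lsq = 0.5 * np.linalg.norm(U.T @ F - V.T @ G, 'fro') ** 2
assert abs(lsq - (L - sigma[:L].sum())) < 1e-6
```

The symmetric inverse square root is computed via an eigendecomposition, which keeps the whitening operation stable for the well-conditioned covariances used here.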
The DCCA objective \eqref{e:dcca} differs from typical DNN regression or classification training objectives. Those objectives are unconstrained and can be written as the expectation (or sum) of error functions (e.g., squared loss or cross-entropy) incurred at each training example. This property naturally suggests stochastic gradient descent (SGD) for optimization, where one iteratively generates random unbiased estimates of the gradient based on one or a few training examples (a minibatch) and takes a small step in the opposite direction. However, the objective in \eqref{e:dcca} cannot be written as an unconstrained sum of per-example errors. The difficulty lies in the fact that the training examples are coupled through the auto-covariance matrices (in the constraints), which cannot be reliably estimated from only a small amount of data.
When introducing deep CCA, \cite{Andrew_13a} used the L-BFGS algorithm for optimization. To compute the gradients of the objective with respect to $(\mathbf{W}_{\mathbf{f}},\mathbf{W}_{\mathbf{g}})$, one first computes the gradients\footnote{Technically we are computing subgradients, as the ``sum of singular values'' (trace norm) is not a differentiable function of the matrix.} with respect to $(\mathbf{F},\mathbf{G})$ as
\begin{align}\label{e:gradient}
\frac{\partial \sum_{j=1}^L \sigma_j} {\partial \mathbf{F}} &= 2\Delta_{ff} \mathbf{F} + \Delta_{fg} \mathbf{G}, \\
\text{with}\qquad \Delta_{ff} & = -\frac{1}{2} \bSigma_{ff}^{-1/2} \tilde{\mathbf{U}} \Lambda \tilde{\mathbf{U}}^\top \bSigma_{ff}^{-1/2}, \nonumber \\
\Delta_{fg} & = \bSigma_{ff}^{-1/2} \tilde{\mathbf{U}} \tilde{\mathbf{V}}^\top \bSigma_{gg}^{-1/2}, \nonumber
\end{align}
where $\mathbf{T}=\tilde{\mathbf{U}}\Lambda\tilde{\mathbf{V}}^\top$ is the SVD of $\mathbf{T}$ as in the closed-form solution of CCA, and $\partial \sum_{j=1}^L \sigma_j / \partial \mathbf{G}$ has an analogous expression. One can then compute the gradients with respect to $\mathbf{W}_{\mathbf{f}}$ and $\mathbf{W}_{\mathbf{g}}$ via the standard backpropagation procedure~\cite{Rumelh_86c}. From the gradient formulas, it is clear that the key to optimizing DCCA is the SVD of $\mathbf{T}$; various nonlinear optimization techniques can be used here once the gradient is computed. In practice, however, batch optimization is undesirable for applications with large training sets or large DNN architectures, as each gradient step computed on the entire training set can be expensive in both memory and time.
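The gradient formula \eqref{e:gradient} can be checked numerically. Below is an illustrative NumPy sketch (data sizes and tolerances are our assumptions) that forms $2\Delta_{ff}\mathbf{F}+\Delta_{fg}\mathbf{G}$ and compares a few entries against central finite differences of the total correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
d_x, d_y, N, L = 4, 3, 200, 3

F = rng.standard_normal((d_x, N))
G = rng.standard_normal((d_y, N))

def inv_sqrt(S):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(w ** -0.5) @ Q.T

def total_correlation(F, G):
    """Sum of the top-L singular values of T = S_ff^{-1/2} S_fg S_gg^{-1/2}."""
    T = inv_sqrt(F @ F.T) @ (F @ G.T) @ inv_sqrt(G @ G.T)
    return np.linalg.svd(T, compute_uv=False)[:L].sum()

# Analytic gradient with respect to F, following the Delta formulas.
Sff_is, Sgg_is = inv_sqrt(F @ F.T), inv_sqrt(G @ G.T)
Ut, s, Vh = np.linalg.svd(Sff_is @ (F @ G.T) @ Sgg_is)
Ut, Vt = Ut[:, :L], Vh.T[:, :L]
Delta_ff = -0.5 * Sff_is @ Ut @ np.diag(s[:L]) @ Ut.T @ Sff_is
Delta_fg = Sff_is @ Ut @ Vt.T @ Sgg_is
grad_F = 2 * Delta_ff @ F + Delta_fg @ G

# Check a few random entries against central finite differences.
eps = 1e-5
for _ in range(5):
    i, j = rng.integers(d_x), rng.integers(N)
    E = np.zeros_like(F)
    E[i, j] = eps
    fd = (total_correlation(F + E, G) - total_correlation(F - E, G)) / (2 * eps)
    assert abs(fd - grad_F[i, j]) < 1e-4
```

Note that the covariances here are unscaled ($\bSigma_{ff}=\mathbf{F}\mathbf{F}^\top$), matching the conventions of the text; with $1/(N-1)$-scaled covariances the same formulas hold up to that constant factor.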
Later, \cite{Wang_15a} observed that stochastic optimization still works well for the DCCA objective, as long as sufficiently large minibatches are used to estimate the covariances and $\mathbf{T}$ when computing the gradient with \eqref{e:gradient}. More precisely, the authors found that learning plateaus at a poor objective value if the minibatch is too small, but that fast convergence and better generalization than batch algorithms are obtained once the minibatch size exceeds some threshold, presumably because a large minibatch contains enough information to estimate the covariances, and hence the gradient, accurately enough. (The threshold varies across datasets, as they have different levels of data redundancy.)
Theoretically, the necessity of using large minibatches in this approach can also be established. Let $\hat{\mathbf{T}}^{(n)}$ denote the empirical estimate of $\mathbf{T}$ computed from a minibatch of $n$ samples.
It can be shown that the expectation of $\hat{\mathbf{T}}^{(n)}$ does not equal the true $\mathbf{T}$ computed on the entire dataset, mainly due to the nonlinearities of the matrix inversion and multiplication operations used in computing $\mathbf{T}$, and the nonlinearity of the ``sum of singular values'' (trace norm) of $\mathbf{T}$; moreover, the spectral norm of the error $\| \hat{\mathbf{T}}^{(n)} - \mathbf{T} \|$ decays only slowly, at the rate $\frac{1}{\sqrt{n}}$. Consequently, the gradient estimated on a minibatch using \eqref{e:gradient} does not equal the true gradient of the objective in expectation, indicating that the stochastic approach of \cite{Wang_15a} does not qualify as a stochastic gradient descent method for the DCCA objective.
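The bias of the plug-in minibatch estimator is easy to observe empirically. The following toy simulation (the data model, sizes, and thresholds are illustrative assumptions, not from the text) averages the minibatch estimate of the total correlation over many random minibatches and compares it with the full-data value:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 3, 20000

# Two weakly correlated views, globally centered.
Z = rng.standard_normal((d, N))
F = Z + 2.0 * rng.standard_normal((d, N))
G = Z + 2.0 * rng.standard_normal((d, N))
F -= F.mean(axis=1, keepdims=True)
G -= G.mean(axis=1, keepdims=True)

def inv_sqrt(S):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(w ** -0.5) @ Q.T

def total_corr(F, G):
    """Plug-in estimate of the sum of singular values of T."""
    T = inv_sqrt(F @ F.T) @ (F @ G.T) @ inv_sqrt(G @ G.T)
    return np.linalg.svd(T, compute_uv=False).sum()

full = total_corr(F, G)   # value on the entire dataset

def mean_minibatch_estimate(n, trials=300):
    """Average the plug-in estimate over random minibatches of size n."""
    vals = [total_corr(F[:, idx], G[:, idx])
            for idx in (rng.choice(N, size=n, replace=False) for _ in range(trials))]
    return float(np.mean(vals))

bias_small = mean_minibatch_estimate(8) - full     # strongly biased upward
bias_large = mean_minibatch_estimate(512) - full   # much closer to the truth

assert bias_small > 0.3
assert bias_small > 10 * abs(bias_large)
```

With tiny minibatches the sample canonical correlations are heavily inflated (close to 1 regardless of the true correlation), which is one concrete manifestation of the biasedness discussed above.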
\section{Our algorithm}
\label{s:algorithm}
\subsection{An iterative solution to linear CCA}
\begin{algorithm}[t]
\caption{CCA projections via alternating least squares.}
\label{alg:cca-iterative}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrices $\mathbf{F}\in \mathbb{R}^{d_x \times N}$, $\mathbf{G}\in \mathbb{R}^{d_y \times N}$. Initialization $\tilde{\mathbf{U}}_0\in \mathbb{R}^{d_x\times L}$ s.t. $\tilde{\mathbf{U}}_0^\top \tilde{\mathbf{U}}_0=\mathbf{I}$.
\STATE $\mathbf{A}_0 \leftarrow \tilde{\mathbf{U}}_0^\top \bSigma_{ff}^{-\frac{1}{2}} \mathbf{F}$
\FOR{$t=1,2,\dots,T$}
\STATE $\mathbf{B}_t \leftarrow \mathbf{A}_{t-1} \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G}$
\STATE $\mathbf{B}_{t} \leftarrow \left(\mathbf{B}_{t}\mathbf{B}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{B}_t$
\STATE $\mathbf{A}_t \leftarrow \mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F}$
\STATE $\mathbf{A}_{t} \leftarrow \left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$
\ENDFOR
\ENSURE $\mathbf{A}_{T}$/$\mathbf{B}_{T}$ are the CCA projections of view 1/2.
\end{algorithmic}
\end{algorithm}
Our solution to \eqref{e:dcca} is inspired by the iterative solution for finding the linear CCA projections $(\mathbf{U}^\top \mathbf{F}, \mathbf{V}^\top \mathbf{G})$ for inputs $(\mathbf{F}, \mathbf{G})$, shown in Algorithm~\ref{alg:cca-iterative}. This algorithm computes the top-$L$ singular vectors $(\tilde{\mathbf{U}},\tilde{\mathbf{V}})$ of $\mathbf{T}$ via orthogonal iterations \cite{GolubLoan96a}. An essentially identical algorithm (named \emph{alternating least squares}, for reasons that will soon become evident) appears in \cite[Algorithm 5.2]{GolubZha95a}, and according to the authors the idea goes back to J.~von Neumann.
A similar algorithm was also recently used by \cite[Algorithm~1]{LuFoster14a} for large-scale linear CCA with high-dimensional sparse inputs, although their algorithm either omits the whitening operations $\mathbf{A}_{t} \leftarrow \left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$ and $\mathbf{B}_{t} \leftarrow \left(\mathbf{B}_{t}\mathbf{B}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{B}_t$, or replaces them with a QR decomposition. The convergence of Algorithm~\ref{alg:cca-iterative} is characterized by the following theorem, which parallels \cite[Theorem~1]{LuFoster14a}.
\begin{theorem}
Let the singular values of $\mathbf{T}$ be
\begin{gather*}
\sigma_1 \ge \dots \ge \sigma_L > \sigma_{L+1} \ge \dots \ge \sigma_{\min(d_x,d_y)},
\end{gather*}
and suppose $\tilde{\mathbf{U}}_0^\top \tilde{\mathbf{U}}$ is nonsingular. Then the output $(\mathbf{A}_T,\mathbf{B}_T)$ of Algorithm~\ref{alg:cca-iterative} converges to the CCA projections as $T\rightarrow \infty$.
\end{theorem}
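As an informal numerical check of this convergence result (a NumPy sketch under synthetic-data assumptions, not part of the formal development), one can run Algorithm~\ref{alg:cca-iterative} and verify that the row space of $\mathbf{A}_T$ matches that of the closed-form projection $\tilde{\mathbf{U}}^\top \bSigma_{ff}^{-\frac{1}{2}} \mathbf{F}$:

```python
import numpy as np

rng = np.random.default_rng(3)
d_x, d_y, N, L, n_iter = 4, 4, 2000, 2, 100

# Two views sharing an L-dimensional latent signal, so that sigma_L > sigma_{L+1}.
Z = rng.standard_normal((L, N))
F = rng.standard_normal((d_x, L)) @ Z + 0.5 * rng.standard_normal((d_x, N))
G = rng.standard_normal((d_y, L)) @ Z + 0.5 * rng.standard_normal((d_y, N))
F -= F.mean(axis=1, keepdims=True)
G -= G.mean(axis=1, keepdims=True)

def inv_sqrt(S):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(w ** -0.5) @ Q.T

def whiten_rows(A):
    """The operation (A A^T)^{-1/2} A: orthonormalize the rows of A."""
    return inv_sqrt(A @ A.T) @ A

# Algorithm 1 (alternating least squares).
U0 = np.linalg.qr(rng.standard_normal((d_x, L)))[0]   # U0^T U0 = I
A = U0.T @ inv_sqrt(F @ F.T) @ F
for _ in range(n_iter):
    B = whiten_rows(A @ G.T @ np.linalg.inv(G @ G.T) @ G)
    A = whiten_rows(B @ F.T @ np.linalg.inv(F @ F.T) @ F)

# Reference: closed-form CCA projection of view 1.
T = inv_sqrt(F @ F.T) @ (F @ G.T) @ inv_sqrt(G @ G.T)
Ut = np.linalg.svd(T)[0][:, :L]
A_ref = Ut.T @ inv_sqrt(F @ F.T) @ F

# A_T has orthonormal rows, and its row space matches the reference projection's
# (all principal angles between the two row spaces are essentially zero).
assert np.allclose(A @ A.T, np.eye(L), atol=1e-6)
cosines = np.linalg.svd(A_ref @ A.T, compute_uv=False)
assert np.all(cosines > 1 - 1e-6)
```

The subspace comparison uses the singular values of $\mathbf{A}_{\text{ref}}\mathbf{A}_T^\top$ (the cosines of the principal angles), since both matrices have orthonormal rows.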
\begin{proof}
We focus on showing that $\mathbf{A}_T$ converges to the view 1 projection; the proof for $\mathbf{B}_T$ is similar.
First recall that $\mathbf{T}=\tilde{\mathbf{U}} \Lambda \tilde{\mathbf{V}}^\top$ is the rank-$L$ SVD of $\bSigma_{ff}^{-\frac{1}{2}} \bSigma_{fg} \bSigma_{gg}^{-\frac{1}{2}}$, and thus $\tilde{\mathbf{U}}$ contains the top-$L$ eigenvectors of $\mathbf{T} \mathbf{T}^\top = \tilde{\mathbf{U}} \Lambda^2 \tilde{\mathbf{U}}^\top$.
Since the operation $\left(\mathbf{A} \mathbf{A}^\top\right)^{-\frac{1}{2}}\mathbf{A}$ extracts an orthonormal basis of the row space of $\mathbf{A}$, at iteration $t$ we can write
\begin{align*}
\mathbf{A}_{t-1} \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G} & = \mathbf{P}_t \mathbf{B}_t,\\
\mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F} & = \mathbf{Q}_t \mathbf{A}_t,
\end{align*}
where $\mathbf{P}_t \in \mathbb{R}^{L\times L}$ and $\mathbf{Q}_t \in \mathbb{R}^{L\times L}$ are nonsingular coefficient matrices (nonsingular by the assumption on the initialization $\tilde{\mathbf{U}}_0$) that represent the left-hand-side matrices in the corresponding row-space bases. Combining the above two equations gives the following recursion at iteration $t$:
\begin{gather*}
\mathbf{A}_{t-1} \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F} = \mathbf{P}_t \mathbf{Q}_t \mathbf{A}_t.
\end{gather*}
By induction, it can be shown that by the end of iteration $t$ we have
\begin{multline*}
\mathbf{A}_0 \left( \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F} \right)^t = \mathbf{O}_t \mathbf{A}_t,
\end{multline*}
where $\mathbf{O}_t=\mathbf{P}_1 \mathbf{Q}_1 \dots \mathbf{P}_t \mathbf{Q}_t \in \mathbb{R}^{L\times L}$ is nonsingular.
Plugging in the definition of $\mathbf{A}_0$, this equation reduces to
\begin{gather}
\tilde{\mathbf{U}}_0^\top \left(\mathbf{T}_{fg} \mathbf{T}_{fg}^\top \right)^t \boldsymbol{\Sigma}_{ff}^{-\frac{1}{2}} \mathbf{F} = \mathbf{O}_t \mathbf{A}_t.
\end{gather}
It is then clear that $\mathbf{A}_t$ can be written as
\begin{gather*}
\mathbf{A}_t = \tilde{\mathbf{U}}_t^\top \boldsymbol{\Sigma}_{ff}^{-\frac{1}{2}} \mathbf{F}
\end{gather*}
with
\begin{gather*}
\tilde{\mathbf{U}}_t = \left(\mathbf{T}_{fg} \mathbf{T}_{fg}^\top \right)^t \tilde{\mathbf{U}}_0 \mathbf{O}_t^{-1} \; \in \mathbb{R}^{d_x\times L}.
\end{gather*}
And since $\mathbf{A}_t$ has orthonormal rows, we have
\begin{gather*}
\mathbf{I}=\mathbf{A}_t \mathbf{A}_t^\top = \tilde{\mathbf{U}}_t^\top \boldsymbol{\Sigma}_{ff}^{-\frac{1}{2}} (\mathbf{F} \mathbf{F}^\top) \boldsymbol{\Sigma}_{ff}^{-\frac{1}{2}} \tilde{\mathbf{U}}_t = \tilde{\mathbf{U}}_t^\top \tilde{\mathbf{U}}_t,
\end{gather*}
indicating that $\tilde{\mathbf{U}}_t$ has orthonormal columns.
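As a quick numerical sanity check of this last identity (an illustrative NumPy sketch, not part of the original derivation; all names below are ours), one can verify that $\mathbf{A}_t=\tilde{\mathbf{U}}_t^\top\boldsymbol{\Sigma}_{ff}^{-\frac{1}{2}}\mathbf{F}$ has orthonormal rows whenever $\tilde{\mathbf{U}}_t$ has orthonormal columns and $\boldsymbol{\Sigma}_{ff}=\mathbf{F}\mathbf{F}^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, L, N = 6, 3, 50
F = rng.standard_normal((d_x, N))

# Whitening factor Sigma_ff^{-1/2}, with Sigma_ff = F F^T (unnormalized,
# as in the text above).
w, V = np.linalg.eigh(F @ F.T)
Sff_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

# Any U~_t with orthonormal columns, e.g. the Q factor of a random matrix.
U_t, _ = np.linalg.qr(rng.standard_normal((d_x, L)))

A_t = U_t.T @ Sff_inv_sqrt @ F   # the form A_t = U~_t^T Sigma_ff^{-1/2} F
gram = A_t @ A_t.T               # equals U~_t^T U~_t = I_L
err = np.abs(gram - np.eye(L)).max()
```

The Gram matrix of the rows of $\mathbf{A}_t$ collapses to $\tilde{\mathbf{U}}_t^\top\tilde{\mathbf{U}}_t$ exactly as in the display above.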
As a result, we consider the algorithm as working implicitly in the space of $\{ \tilde{\mathbf{U}}_t\in \mathbb{R}^{d_x\times L},\ t=0,\dots,T\}$, and have
\begin{gather} \label{e:orth-iteration}
(\mathbf{T}_{fg} \mathbf{T}_{fg}^\top)^T \tilde{\mathbf{U}}_0 = \tilde{\mathbf{U}}_T \mathbf{O}_T.
\end{gather}
Following the argument of~\cite[Theorem~8.2.2]{GolubLoan96a} for orthogonal iterations, under the assumptions of our theorem, the column space of $\tilde{\mathbf{U}}_T$ converges to that of $\tilde{\mathbf{U}}$, the matrix of top-$L$ eigenvectors of $\mathbf{T}_{fg} \mathbf{T}_{fg}^\top$, with a linear convergence rate depending on the ratio $\sigma_{L+1}/\sigma_L$. In view of the relationship between $\tilde{\mathbf{U}}_T$ and $\mathbf{A}_T$, we conclude that $\mathbf{A}_T$ converges to the view 1 CCA projection as $T\rightarrow \infty$.
\end{proof}
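The orthogonal iteration that the proof reduces to can be sketched numerically as follows (a toy NumPy example under our own assumptions: $M$ stands in for $\mathbf{T}_{fg}\mathbf{T}_{fg}^\top$, and its spectrum is chosen by hand so that the eigenvalue gap, and hence the linear convergence, is easy to observe):

```python
import numpy as np

rng = np.random.default_rng(1)
d, L = 8, 2
# Symmetric PSD matrix with a hand-picked spectrum; the ratio
# lambda_3 / lambda_2 = 0.25 gives fast linear convergence.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
lam = np.array([10.0, 8.0, 2.0, 1.5, 1.0, 0.5, 0.2, 0.1])
M = Q @ np.diag(lam) @ Q.T

# Orthogonal iteration: multiply, then re-orthonormalize the columns.
U_t, _ = np.linalg.qr(rng.standard_normal((d, L)))
for _ in range(60):
    U_t, _ = np.linalg.qr(M @ U_t)

# Compare column spaces via principal angles: all singular values of
# U_star^T U_t are ~1 exactly when the two subspaces coincide.
w, V = np.linalg.eigh(M)
U_star = V[:, -L:]               # top-L eigenvectors of M
s = np.linalg.svd(U_star.T @ U_t, compute_uv=False)
subspace_gap = 1.0 - s.min()
```

After a few dozen iterations the iterate spans the top-$L$ eigenspace to machine precision.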
It is interesting to note that, besides the whitening operations $\left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$, the other basic operations in each iteration of Algorithm~\ref{alg:cca-iterative} are of the form
\begin{gather}\label{e:lsq}
\mathbf{A}_t \leftarrow \mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F},
\end{gather}
which amounts to solving a linear least squares (regression) problem with input $\mathbf{F}$ and target output $\mathbf{B}_{t}$ satisfying $\mathbf{B}_{t}\mathbf{B}_{t}^\top=\mathbf{I}$, i.e.,
\begin{gather*}
\min_{\mathbf{U}_t} \quad \left\| \mathbf{U}_t^\top \mathbf{F} - \mathbf{B}_t \right\|_F^2.
\end{gather*}
By setting the gradient of this unconstrained objective to zero, we obtain $\mathbf{U}_t=(\mathbf{F}\mathbf{F}^\top)^{-1} \mathbf{F} \mathbf{B}_t^\top$, and so the optimal projection $\mathbf{U}_t^\top \mathbf{F}$ coincides with the update \eqref{e:lsq}.
For \cite{LuFoster14a}, the advantage of the alternating least squares formulation over the exact solution to CCA is that it does not need to form the high-dimensional (dense) matrix $\mathbf{T}_{fg}$; instead, it operates directly on the projections, which are much smaller, and the least squares problems can be solved with iterative algorithms that require only sparse matrix--vector multiplications.
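A small NumPy check (ours; shapes are illustrative) that the closed-form least-squares solution reproduces the projection update \eqref{e:lsq}:

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, L, N = 5, 3, 40
F = rng.standard_normal((d_x, N))

# A target B with orthonormal rows (B B^T = I), as required in the text.
Qb, _ = np.linalg.qr(rng.standard_normal((N, L)))
B = Qb.T

# Least-squares solution U = (F F^T)^{-1} F B^T ...
U = np.linalg.solve(F @ F.T, F @ B.T)
# ... and the projection update B F^T (F F^T)^{-1} F from (e:lsq).
update = B @ F.T @ np.linalg.solve(F @ F.T, F)
diff = np.abs(U.T @ F - update).max()
```

Since $(\mathbf{F}\mathbf{F}^\top)^{-1}$ is symmetric, $\mathbf{U}_t^\top\mathbf{F}$ and the right-hand side of \eqref{e:lsq} agree to floating-point precision.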
\subsection{Extension to DCCA}
Our intuition for adapting Algorithm~\ref{alg:cca-iterative} to DCCA is as follows. During DCCA optimization, the DNN weights $(\mathbf{W}_{\mathbf{f}},\mathbf{W}_{\mathbf{g}})$ are updated frequently, and thus the outputs $\left( \mathbf{f}(\mathbf{X}),\mathbf{g}(\mathbf{Y}) \right)$, which are also the inputs to the last CCA step, change upon each weight update. Therefore, the last CCA step needs to adapt to a fast-evolving input data distribution. On the other hand, if we update the CCA weights $(\mathbf{U},\mathbf{V})$ based on a small minibatch of data (as happens in stochastic optimization), it is intuitively wasteful to solve for $(\mathbf{U},\mathbf{V})$ to optimality rather than make a simple update based on the minibatch. Moreover, the objective of this ``simple update'' can be used to derive a gradient estimate for $(\mathbf{W}_{\mathbf{f}}, \mathbf{W}_{\mathbf{g}})$.
In view of Algorithm~\ref{alg:cca-iterative}, it is natural to embed the optimization of $(\mathbf{f}, \mathbf{g})$ into the iterative solution of linear CCA. Instead of solving the regression problem $\mathbf{F} \rightarrow \mathbf{B}_{t}$ exactly via $\mathbf{A}_t \leftarrow \mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F}$, we solve the problem $\mathbf{X} \rightarrow \mathbf{B}_{t}$ approximately on a minibatch, with a gradient descent step on $(\mathbf{W}_{\mathbf{f}}, \mathbf{U})$ jointly (recall that $\mathbf{F}=\mathbf{f}(\mathbf{X})$ is a function of $\mathbf{W}_{\mathbf{f}}$).
Notice that this regression objective is unconstrained and decouples over training samples, so an unbiased gradient estimate for this problem is easily derived through standard backpropagation on minibatches (this gradient estimate, however, may not be unbiased for the original DCCA objective; see the discussion in Section~\ref{s:related}).
The less trivial part of Algorithm~\ref{alg:cca-iterative} to implement for DCCA is the whitening operation $\left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$, which requires $\mathbf{A}_t\in\mathbb{R}^{L\times N}$, the projections of all training samples. We would like to avoid computing $\mathbf{A}_t$ exactly, as doing so requires feeding forward the entire training set $\mathbf{X}$ through the updated $\mathbf{W}_{\tilde{\mathbf{f}}}$, at a computational cost as high as (half of) that of evaluating the batch gradient (the latter requires both a forward and a backward pass).
We bypass this difficulty by noting that the only portion of $\mathbf{A}_{t}$ needed is the updated projection of the minibatch used in the subsequent view 2 regression problem $\mathbf{Y} \rightarrow \mathbf{A}_{t}$ (corresponding to the step $\mathbf{B}_{t+1} \leftarrow \mathbf{A}_t \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G}$ in Algorithm~\ref{alg:cca-iterative}). Therefore, if we have an estimate of the covariance $\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}}^t:=\mathbf{A}_{t}\mathbf{A}_{t}^\top$ that does not require feeding forward the entire training set, we can estimate the updated projection for this minibatch only. Specifically, we estimate this quantity by\footnote{For numerical stability, we add a small value $\epsilon>0$ to the diagonal of the covariance estimates in our implementation.}
\begin{gather}\label{e:memory}
\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}}^{t} \leftarrow \rho\,\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}}^{t-1} + (1-\rho)\, \frac{N}{|b|}\, \tilde{\mathbf{f}}(\mathbf{X}_b)\tilde{\mathbf{f}}(\mathbf{X}_b)^\top,
\end{gather}
where $\rho\in[0,1]$, $\mathbf{X}_b$ denotes a minibatch of data with index set $b$, and $|b|$ denotes the size (number of samples) of this minibatch. The time constant $\rho$ controls how much of the previous covariance estimate is kept in the update; a larger $\rho$ means the ``memory'' is forgotten more slowly. Assuming the parameters do not change much from time $t-1$ to $t$, $\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}}^{t-1}$ will be close to $\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}}^{t}$, and incorporating it helps reduce the variance of the term $\tilde{\mathbf{f}}(\mathbf{X}_b)\tilde{\mathbf{f}}(\mathbf{X}_b)^\top$ when $|b|\ll N$.
The update in \eqref{e:memory} is similar in form to the momentum technique widely used in the optimization~\cite{Polyak64a} and neural network literature~\cite{Sutskev_13a,Schaul_13a}, and is also used by \cite{Brand06a,SantosMilidiu10a,Yger_12a} for online subspace tracking and anomaly detection. We note that the memory cost of $\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}}^{t} \in \mathbb{R}^{L\times L}$ is small, as we seek low-dimensional projections (small $L$) in practice. These advantages motivate our choice of whitening operations over the QR decomposition used by \cite{LuFoster14a}.
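A minimal sketch of the running estimate \eqref{e:memory}; the helper name \texttt{update\_cov} and all constants below are ours, not the paper's. Over many iterations the estimate tracks the full-data (uncentered) covariance:

```python
import numpy as np

def update_cov(Sigma_prev, Fb, N, rho):
    """One EMA covariance step: rho * previous + (1-rho) * (N/|b|) Fb Fb^T,
    where Fb holds the minibatch projections as columns."""
    b = Fb.shape[1]
    return rho * Sigma_prev + (1.0 - rho) * (N / b) * (Fb @ Fb.T)

rng = np.random.default_rng(3)
L, N, bsize, rho = 4, 1000, 50, 0.95
data = rng.standard_normal((L, N))   # stand-in for the projections of all samples
full_cov = data @ data.T             # the quantity being tracked

Sigma = np.zeros((L, L))
for _ in range(500):
    idx = rng.integers(0, N, size=bsize)   # random minibatch (with replacement)
    Sigma = update_cov(Sigma, data[:, idx], N, rho)

rel_err = np.linalg.norm(Sigma - full_cov) / np.linalg.norm(full_cov)
```

Each minibatch term is an unbiased estimate of the full covariance, and the exponential averaging with $\rho=0.95$ shrinks its variance, so the relative error settles at a few percent.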
\begin{algorithm}[t]
\caption{Nonlinear orthogonal iterations (NOI) for DCCA.}
\label{alg:dcca}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrices $\mathbf{X}\in \mathbb{R}^{D_x \times N}$, $\mathbf{Y}\in \mathbb{R}^{D_y \times N}$. Initialization $(\mathbf{W}_{\tilde{\mathbf{f}}}, \mathbf{W}_{\tilde{\mathbf{g}}})$, time constant $\rho$, learning rate $\eta$.
\STATE Randomly choose a minibatch $(\mathbf{X}_{b_0},\mathbf{Y}_{b_0})$
\STATE $\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}} \leftarrow \frac{N}{|b_0|}\sum_{i\in b_0} \tilde{\mathbf{f}}(\mathbf{x}_i)\tilde{\mathbf{f}}(\mathbf{x}_i)^\top$
\STATE $\boldsymbol{\Sigma}_{\tilde{\mathbf{g}}\tilde{\mathbf{g}}} \leftarrow \frac{N}{|b_0|}\sum_{i\in b_0} \tilde{\mathbf{g}}(\mathbf{y}_i)\tilde{\mathbf{g}}(\mathbf{y}_i)^\top$
\FOR{$t=1,2,\dots,T$}
\STATE Randomly choose a minibatch $(\mathbf{X}_{b_t},\mathbf{Y}_{b_t})$
\STATE $\boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}} \leftarrow \rho \boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}} + (1-\rho) \frac{N}{|b_t|}\sum_{i\in b_t} \tilde{\mathbf{f}}(\mathbf{x}_i)\tilde{\mathbf{f}}(\mathbf{x}_i)^\top$
\STATE $\boldsymbol{\Sigma}_{\tilde{\mathbf{g}}\tilde{\mathbf{g}}} \leftarrow \rho \boldsymbol{\Sigma}_{\tilde{\mathbf{g}}\tilde{\mathbf{g}}} + (1-\rho) \frac{N}{|b_t|}\sum_{i\in b_t} \tilde{\mathbf{g}}(\mathbf{y}_i)\tilde{\mathbf{g}}(\mathbf{y}_i)^\top$
\STATE Compute the gradient $\partial \mathbf{W}_{\tilde{\mathbf{f}}}$ of the objective
\begin{gather*}
\min_{\mathbf{W}_{\tilde{\mathbf{f}}}}\; \frac{1}{|b_t|} \sum_{i\in b_t} \left\| \tilde{\mathbf{f}}(\mathbf{x}_i) - \boldsymbol{\Sigma}_{\tilde{\mathbf{g}}\tilde{\mathbf{g}}}^{-\frac{1}{2}}\tilde{\mathbf{g}}(\mathbf{y}_i) \right\|^2
\end{gather*}
\STATE Compute the gradient $\partial \mathbf{W}_{\tilde{\mathbf{g}}}$ of the objective
\begin{gather*}
\min_{\mathbf{W}_{\tilde{\mathbf{g}}}}\; \frac{1}{|b_t|} \sum_{i\in b_t} \left\| \tilde{\mathbf{g}}(\mathbf{y}_i) - \boldsymbol{\Sigma}_{\tilde{\mathbf{f}}\tilde{\mathbf{f}}}^{-\frac{1}{2}}\tilde{\mathbf{f}}(\mathbf{x}_i) \right\|^2
\end{gather*}
\STATE $\mathbf{W}_{\tilde{\mathbf{f}}} \leftarrow \mathbf{W}_{\tilde{\mathbf{f}}} - \eta\, \partial \mathbf{W}_{\tilde{\mathbf{f}}}$, $\quad \mathbf{W}_{\tilde{\mathbf{g}}} \leftarrow \mathbf{W}_{\tilde{\mathbf{g}}} - \eta\, \partial \mathbf{W}_{\tilde{\mathbf{g}}}$.
\ENDFOR
\ENSURE The updated $(\mathbf{W}_{\tilde{\mathbf{f}}}, \mathbf{W}_{\tilde{\mathbf{g}}})$.
\end{algorithmic}
\end{algorithm}
We give the resulting nonlinear orthogonal iterations (NOI) procedure for DCCA in Algorithm~\ref{alg:dcca}. Adaptive whitening is now used to obtain suitable target outputs for the regression problems from which the derivatives $(\partial \mathbf{W}_{\tilde{\mathbf{f}}}, \partial \mathbf{W}_{\tilde{\mathbf{g}}})$ are computed, and we no longer maintain the whitened projections of the entire training set at each iteration.
Therefore, by the end of the algorithm, $(\tilde{\mathbf{f}}(\mathbf{X}),\tilde{\mathbf{g}}(\mathbf{Y}))$ may not satisfy the whitening constraints of \eqref{e:dcca}. If desired, one may run an additional CCA step on $(\tilde{\mathbf{f}}(\mathbf{X}),\tilde{\mathbf{g}}(\mathbf{Y}))$ to obtain a feasible solution of the original problem; this amounts to linear transforms in $\mathbb{R}^L$, which do not change the canonical correlations between the projections on either the training or the test set.
In practice, we adaptively estimate the means of $\tilde{\mathbf{f}}(\mathbf{X})$ and $\tilde{\mathbf{g}}(\mathbf{Y})$ with an update formula similar to \eqref{e:memory} and center the samples accordingly before estimating the covariances and computing the target outputs. We also use momentum in the stochastic gradient steps for the nonlinear least squares problems, as is common in the deep learning community~\cite{Sutskev_13a}. Overall, Algorithm~\ref{alg:dcca} is intuitively quite simple: it alternates between adaptive covariance estimation/whitening and stochastic gradient steps on (a stochastic version of) the least squares objectives, without any involved gradient computation.
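To make the mechanics concrete, the following toy sketch performs one NOI update for view 1 with a \emph{linear} ``network'' $f(\mathbf{x})=\mathbf{W}_f\mathbf{x}$, so the stochastic gradient of the regression objective has a closed form. All shapes, step sizes, and names are illustrative, and the covariance here is a single minibatch estimate rather than the running average of \eqref{e:memory}:

```python
import numpy as np

def inv_sqrt(S, eps=1e-6):
    """Inverse matrix square root, with a small ridge on the diagonal
    for numerical stability (cf. the footnote about adding epsilon)."""
    w, V = np.linalg.eigh(S + eps * np.eye(S.shape[0]))
    return V @ np.diag(w ** -0.5) @ V.T

rng = np.random.default_rng(4)
Dx, Dy, L, b = 10, 8, 2, 100
Xb = rng.standard_normal((Dx, b))        # view 1 minibatch
Yb = rng.standard_normal((Dy, b))        # view 2 minibatch
Wf = 0.1 * rng.standard_normal((L, Dx))  # linear "network" f(x) = Wf x
Wg = 0.1 * rng.standard_normal((L, Dy))  # linear "network" g(y) = Wg y

# Whitened view-2 projections serve as regression targets for view 1.
Gb = Wg @ Yb
Sgg = (Gb @ Gb.T) / b
target = inv_sqrt(Sgg) @ Gb

def loss(W):
    """Minibatch regression objective (1/|b|) sum_i ||W x_i - target_i||^2."""
    R = W @ Xb - target
    return (R * R).sum() / b

grad = (2.0 / b) * (Wf @ Xb - target) @ Xb.T  # closed-form gradient wrt Wf
loss_before = loss(Wf)
Wf_new = Wf - 1e-3 * grad                     # one stochastic gradient step
loss_after = loss(Wf_new)
```

With a DNN in place of $\mathbf{W}_f$, the same gradient would come from standard backpropagation; the whitened target is what ties the step back to the orthogonal iteration.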
\section{Related Work}
\label{s:related}
Stochastic (and online) optimization techniques for fundamental problems such as principal component analysis and partial least squares are of continuing research interest~\cite{Krasul69a,OjaKarhun85a,WarmutKuzmin08a,Arora_12a,Arora_13a,Mitliag_13a,Balsub_13a,Shamir15a}. However, as pointed out by \cite{Arora_12a}, the CCA objective is more challenging due to the whitening constraints.
Recently, \cite{Yger_12a} proposed an adaptive CCA algorithm with efficient online updates based on the matrix manifolds defined by the whitening constraints. However, the goal of their algorithm is anomaly detection rather than optimizing the canonical correlation objective for a given dataset.
Based on the alternating least squares formulation of CCA (Algorithm~\ref{alg:cca-iterative}), \cite{LuFoster14a} proposed an iterative solution of CCA for very high-dimensional and sparse input features; the key idea is to solve the high-dimensional least squares problems with randomized PCA and (batch) gradient descent.
\ensuremath{\mathbf{b}}egin{algorithm}[t]
\ensuremath{\mathbf{c}}aption{CCA via gradient descent over least squares. }
\label{alg:cca-gd}
\renewcommand{\textbf{Input:}}{\ensuremath{\mathbf{t}}extbf{Input:}}
\renewcommand{\textbf{Output:}}{\ensuremath{\mathbf{t}}extbf{Output:}}
\ensuremath{\mathbf{b}}egin{algorithmic}
\REQUIRE Data matrix $\ensuremath{\ensuremath{\mathbf{m}}athbf{e}}nsuremath{\ensuremath{\mathbf{m}}athbf{F}}\in \ensuremath{\mathbb{R}}^{d_x \ensuremath{\mathbf{t}}imes N}$, $\ensuremath{\ensuremath{\mathbf{m}}athbf{e}}nsuremath{\ensuremath{\mathbf{m}}athbf{G}}\in \ensuremath{\mathbb{R}}^{d_y \ensuremath{\mathbf{t}}imes N}$. Initialization ${\ensuremath{\mathbf{u}}}_0 \in \ensuremath{\mathbb{R}}^{d_x}$, ${\ensuremath{\mathbf{v}}}_0 \in \ensuremath{\mathbb{R}}^{d_y}$. Learning rate $\ensuremath{\ensuremath{\mathbf{m}}athbf{e}}ta$.
\ensuremath{\ensuremath{\mathbf{m}}athbf{e}}nsuremath{\ensuremath{\mathbf{m}}athbf{F}}OR{$t=1,2,\dots,T$}
\STATE $\mathbf{u}_t \leftarrow \mathbf{u}_{t-1} - \eta \mathbf{F} (\mathbf{F}^\top \mathbf{u}_{t-1} - \frac{1}{\norm{\mathbf{v}_{t-1}^\top \mathbf{G}}} \mathbf{G}^\top \mathbf{v}_{t-1})$
\STATE $\mathbf{v}_t \leftarrow \mathbf{v}_{t-1} - \eta \mathbf{G} (\mathbf{G}^\top \mathbf{v}_{t-1} - \frac{1}{\norm{\mathbf{u}_{t-1}^\top \mathbf{F}}} \mathbf{F}^\top \mathbf{u}_{t-1})$
\ENDFOR
\STATE $\mathbf{u} \leftarrow \frac{\mathbf{u}_T}{\norm{\mathbf{u}_T^\top \mathbf{F}}}$, \quad $\mathbf{v} \leftarrow \frac{\mathbf{v}_T}{\norm{\mathbf{v}_T^\top \mathbf{G}}}$
\ENSURE $\mathbf{u}$/$\mathbf{v}$ are the CCA directions of view 1/2.
\end{algorithmic}
\end{algorithm}
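As a concrete illustration, the iteration above can be sketched in NumPy (an illustrative sketch, not the authors' code: the views are assumed to be centered $d\times N$ matrices `F`, `G`, the step size `eta` and iteration count `T` are placeholders, and the norm is taken to be Euclidean):

```python
import numpy as np

def cca_gd(F, G, eta=1e-3, T=500, seed=0):
    """Batch gradient descent for one pair of linear CCA directions.

    Each step takes a gradient step on the least squares objective of one
    view, whose target is the whitened projection of the other view, as in
    the algorithm above.  F, G: (d1, N) and (d2, N) centered data matrices.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(F.shape[0])
    v = rng.standard_normal(G.shape[0])
    for _ in range(T):
        tu = G.T @ v / np.linalg.norm(v @ G)   # whitened target from view 2
        tv = F.T @ u / np.linalg.norm(u @ F)   # whitened target from view 1
        # simultaneous update, both using the previous (u, v)
        u, v = u - eta * F @ (F.T @ u - tu), v - eta * G @ (G.T @ v - tv)
    # final whitening step
    return u / np.linalg.norm(u @ F), v / np.linalg.norm(v @ G)
```

On two strongly correlated synthetic views this produces highly correlated projections; whether it always converges to the CCA solution is exactly the open question discussed in the text.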
Upon the submission of this paper, we have become aware of the very recent publication of \cite{Ma_15b}, which extends \cite{LuFoster14a} by solving the linear least squares problems with (stochastic) gradient descent. We notice that a special case of our algorithm ($\rho=0$) is equivalent to theirs for linear CCA. To see this, we give the linear CCA version of our algorithm (for a one-dimensional projection, to be consistent with the notation of \cite{Ma_15b}) in Algorithm~\ref{alg:cca-gd}, where we take a batch gradient descent step over the least squares objectives in each iteration. This algorithm is equivalent to Algorithm~3 of \cite{Ma_15b}.\footnote{Although Algorithm~3 of \cite{Ma_15b} maintains two copies---the normalized and the unnormalized versions---of the weight parameters, we observe that the sole purpose of the normalized version in the intermediate iterations is to provide whitened target output for the least squares problems; our version of the algorithm eliminates this copy and the normalized version can be retrieved by a whitening step at the end.} Though intuitively very simple, the analysis of this algorithm is challenging.
In~\cite{Ma_15b} it is shown that the solution to the CCA objective is a fixed point of this algorithm, but no global convergence property is given. We also notice that the gradients used in this algorithm are derived from the alternating least squares problems
\begin{gather*}
\min_{\mathbf{u}}\; \norm{ \mathbf{u}^\top \mathbf{F} - \frac{\mathbf{v}^\top \mathbf{G}}{\norm{\mathbf{v}^\top \mathbf{G}}} }_F^2 \text{\ and \ } \min_{\mathbf{v}}\; \norm{ \mathbf{v}^\top \mathbf{G} - \frac{\mathbf{u}^\top \mathbf{F}}{\norm{\mathbf{u}^\top \mathbf{F}}} }_F^2,
\end{gather*}
while the true CCA objective can be written as
\begin{gather*}
\min_{\mathbf{u},\mathbf{v}}\; \norm{ \frac{\mathbf{u}^\top \mathbf{F}}{\norm{\mathbf{u}^\top \mathbf{F}}} - \frac{\mathbf{v}^\top \mathbf{G}}{\norm{\mathbf{v}^\top \mathbf{G}}}}_F^2.
\end{gather*}
This shows that Algorithm~3 is \emph{not} implementing gradient descent over the CCA objective.
When extending Algorithm~3 to stochastic optimization, the key differences between their algorithm and ours are as follows.
Due to the evolving $(\mathbf{W}_\mathbf{f},\mathbf{W}_\mathbf{g})$, the last CCA step in the DCCA model is dealing with different $(\mathbf{f}(\mathbf{X}),\mathbf{g}(\mathbf{Y}))$ and covariance structures in different iterates, even though the original inputs $(\mathbf{X},\mathbf{Y})$ are the same; this motivates the adaptive estimate of covariances in \eqref{e:memory}. In the whitening steps of \cite{Ma_15b}, however, the covariances are estimated using \emph{only} the current minibatch at each iterate, without consideration of the remaining training samples or previous estimates, which corresponds to $\rho\rightarrow 0$ in our estimate. \cite{Ma_15b} also
suggests using a minibatch size of the order $\mathcal{O}(L)$, the dimensionality of the covariance matrices to be estimated, in order to obtain a high-accuracy estimate for whitening. As we will show in the experiments, in both CCA and DCCA, it is important to incorporate the previous covariance estimates ($\rho\rightarrow 1$) at each step to reduce the variance, especially when small minibatches are used. Based on the above analysis for batch gradient descent, solving the least squares problem with stochastic gradient descent is \emph{not} implementing stochastic gradient descent over the CCA objective. Nonetheless, as shown in the experiments, this stochastic approach works remarkably well and can match the performance of batch optimization, for both linear and nonlinear CCA, and is thus worth careful analysis.
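The adaptive estimate \eqref{e:memory} is not reproduced in this excerpt; one plausible form consistent with the discussion is an exponential moving average between the previous estimate and the current minibatch scatter (`update_cov` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def update_cov(S_prev, Xb, rho):
    """Memory-based covariance estimate (one plausible form of Eq. e:memory).

    Exponentially weighted average of the previous estimate S_prev and the
    scatter of the current centered d x n minibatch Xb.  rho -> 0 recovers
    the minibatch-only estimate used in the whitening steps of Ma et al.;
    rho -> 1 keeps the running estimate essentially frozen.
    """
    n = Xb.shape[1]
    return rho * S_prev + (1.0 - rho) * (Xb @ Xb.T) / n
```

By construction, `rho=0` returns exactly the current minibatch covariance and `rho=1` returns the previous estimate unchanged.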
Finally, we remark that other possible approaches for solving \eqref{e:dcca} exist. Since the difficulty lies in the whitening constraints, one can relax the constraints and solve the Lagrangian formulation repeatedly with updated Lagrange multipliers, as done by \cite{LaiFyfe00a}; or one can introduce auxiliary variables and apply the quadratic penalty method \cite{NocedalWright06a}, as done by \cite{CarreirWang14b}. The advantage of such approaches is that there is no coupling of all training samples when optimizing the primal variables (the DNN weight parameters), and thus one can easily apply SGD there; but one also needs to deal with the Lagrange multipliers or to set a schedule for the quadratic penalty parameter (which is non-trivial) and alternately optimize over two sets of variables repeatedly in order to obtain a solution of the original constrained problem.
\begin{table}[t]
\centering
\caption{Statistics of two real-world datasets.}
\label{t:datasets}
\begin{tabular}{|c||c|c|c|}
\hline
dataset & training/tuning/test & $L$ & DNN architectures \\ \hline
JW11 & 30K/11K/9K & 112 & \caja{c}{c}{273-1800-1800-112\\112-1200-1200-112} \\ \hline
MNIST & 50K/10K/10K & 50 & \caja{c}{c}{392-800-800-50\\392-800-800-50} \\
\hline
\end{tabular}
\end{table}
\section{Experiments}
\label{s:experiments}
\subsection{Experimental setup}
We now demonstrate the NOI algorithm on the two real-world datasets used by \cite{Andrew_13a} when introducing DCCA. The first dataset is a subset of the University of Wisconsin X-Ray Microbeam corpus~\cite{Westbur94a}, which consists of simultaneously recorded acoustic and articulatory measurements during speech. Following \cite{Andrew_13a,Wang_15a}, the acoustic view inputs are 39D Mel-frequency cepstral coefficients and the articulatory view inputs are horizontal/vertical displacements of 8 pellets attached to different parts of the vocal tract, each then concatenated over a 7-frame context window, for speaker `JW11'. The second dataset consists of left/right halves of the images in the MNIST dataset~\cite{Lecun_98a}, and so the input of each view consists of $28\times 14$ grayscale images. We do not tune neural network architectures as that is outside the scope of this paper. Instead, we use DNN architectures similar to those used by \cite{Andrew_13a} with ReLU activations~\cite{NairHinton10a}, and we achieve better generalization performance with these architectures mainly due to better optimization. The statistics of each dataset and the chosen DNN architectures (widths of input layer-hidden layers-output layer) are given in Table~\ref{t:datasets}. The projection dimensionality $L$ is set to 112/50 for JW11/MNIST respectively as in \cite{Andrew_13a}; these are also the maximum possible total canonical correlations for the two datasets.
We compare three optimization approaches: full batch optimization by L-BFGS~\cite{Andrew_13a}, using the implementation of \cite{Schmid12a} which includes a good line-search procedure; stochastic optimization with large minibatches~\cite{Wang_15a}, denoted STOL; and our algorithm, denoted NOI. We create training/tuning/test splits for each dataset and measure the total canonical correlations on the test sets (measured by linear CCA on the projections) for different optimization methods. Hyperparameters of each algorithm, including $\rho$ for NOI, minibatch size $n=\abs{b_1}=\abs{b_2}=\dots$, learning rate $\eta$ and momentum $\mu$ for both STOL and NOI, are chosen by grid search on the tuning set. All methods use the same random initialization for DNN weight parameters. We set the maximum number of iterations to $300$ for L-BFGS and the number of epochs (one pass over the training set) to $50$ for STOL and NOI.
\begin{table*}[!t]\centering
\caption{Total test set canonical correlation obtained by different algorithms.}
\label{t:corr}
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
\multirow{2}{*}{dataset} & \multirow{2}{*}{L-BFGS} & \multicolumn{2}{|c}{STOL} & \multicolumn{4}{|c|}{NOI} \\ \cline{3-8}
&& $n=100$ & $n=500$ & $n=10$ & $n=20$ & $n=50$ & $n=100$ \\ \hline
JW11 & 78.7 & 33.0 & 86.7 & 83.6 & 86.9 & 87.9 & 89.1 \\ \hline
MNIST & 47.0 & 26.1 & 47.0 & 45.9 & 46.4 & 46.4 & 46.4 \\ \hline
\end{tabular}
\end{table*}
\begin{figure}[t]
\centering
\begin{tabular}{@{}c@{\hspace{0.03\linewidth}}c@{}}
JW11 & MNIST \\[.5ex]
\psfrag{corr}[][]{Canon. Corr.}
\psfrag{iteration}[t][]{epoch}
\psfrag{LBFGS n=N}[l][l][0.52]{L-BFGS $n\!=\!N$}
\psfrag{STOL n=100}[l][l][0.55]{STOL $n\!=\!100$}
\psfrag{STOL n=500}[l][l][0.55]{STOL $n\!=\!500$}
\psfrag{NOI n=10}[l][l][0.55]{NOI $n\!=\!10$}
\psfrag{NOI n=20}[l][l][0.55]{NOI $n\!=\!20$}
\psfrag{NOI n=50}[l][l][0.55]{NOI $n\!=\!50$}
\psfrag{NOI n=100}[l][l][0.55]{NOI $n\!=\!100$}
\includegraphics[width=0.50\linewidth]{JW11_varyb.eps} &
\psfrag{iteration}[t][]{epoch}
\includegraphics[width=0.47\linewidth]{MNIST_varyb.eps}
\end{tabular}
\caption{Learning curves of different algorithms on tuning sets with different minibatch size $n$.}
\label{f:varyn}
\end{figure}
\subsection{Effect of minibatch size $n$}
In the first set of experiments, we vary the minibatch size $n$ of NOI over $\{10,20,50,100\}$, while tuning $\rho$, $\eta$ and $\mu$. Learning curves (objective value vs.~number of epochs) on the tuning set for each $n$ with the corresponding optimal hyperparameters are shown in Fig.~\ref{f:varyn}. For comparison, we also show the learning curves of STOL with $n=100$ and $n=500$, where $\eta$ and $\mu$ are also tuned by grid search. We observe that STOL performs very well at $n=500$ (with the performance on MNIST being somewhat better due to higher data redundancy), but it cannot achieve much progress in the objective over the random initialization with $n=100$, for the reasons described earlier. In contrast, NOI achieves very competitive performance with various small minibatch sizes, with fast improvement in the objective during the first few iterations, although larger $n$ tends to achieve slightly higher correlation on tuning/test sets eventually. Total canonical correlations on the test sets are given in Table~\ref{t:corr}, showing that we achieve better results than \cite{Andrew_13a} with similar DNN architectures.
\subsection{Effect of time constant $\rho$}
\begin{figure}[t]
\centering
\psfrag{0}[][][.47]{$0$}
\psfrag{0.2}[][][.47]{$0.2$}
\psfrag{0.4}[][][.47]{$0.4$}
\psfrag{0.6}[][][.47]{$0.6$}
\psfrag{0.8}[][][.47]{$0.8$}
\psfrag{0.9}[][][.47]{$0.9$}
\psfrag{0.99}[][][.47]{$0.99$}
\psfrag{0.999}[][][.47]{$\, 0.999$}
\psfrag{0.9999}[][][.47]{$\quad 0.9999$}
\psfrag{1}[][][.47]{$\; 1$}
\begin{tabular}{@{}c@{\hspace{0.05\linewidth}}c@{}}
JW11 & MNIST \\[1ex]
\psfrag{corr}[][]{Canon. Corr.}
\psfrag{rho}[t][]{$\rho$}
\psfrag{n=10}[l][l][0.55]{$n\!=\!10$}
\psfrag{n=20}[l][l][0.55]{$n\!=\!20$}
\psfrag{n=50}[l][l][0.55]{$n\!=\!50$}
\psfrag{n=100}[l][l][0.55]{$n\!=\!100$}
\includegraphics[width=0.49\linewidth]{JW11_varyr.eps} &
\psfrag{rho}[t][]{$\rho$}
\includegraphics[width=0.46\linewidth]{MNIST_varyr.eps}
\end{tabular}
\caption{Total correlation achieved by NOI on tuning sets with different $\rho$.}
\label{f:varyr}
\end{figure}
In the second set of experiments, we demonstrate the importance of $\rho$ in NOI for different minibatch sizes. The total canonical correlations achieved by NOI on the tuning set for $\rho\in\{0,\, 0.2,\, 0.4,\, 0.6,\, 0.8,\, 0.9,\, 0.99,\, 0.999,\, 0.9999\}$ are shown in Fig.~\ref{f:varyr}, while other hyperparameters are set to their optimal values. We confirm that for relatively large $n$, NOI works reasonably well with $\rho=0$ (so we are using the same covariance estimate/whitening as \cite{Ma_15b}). But also as expected, when $n$ is small, it is beneficial to incorporate the previous estimate of the covariance because the covariance information contained in each small minibatch is noisy. Also, as $\rho$ becomes too close to $1$, the covariance estimates are not adapted to the DNN outputs and the performance of NOI degrades. Moreover, we observe that the optimal $\rho$ value differs for each $n$.
\begin{figure}[t]
\centering
\psfrag{0}[][][.7]{$0$}
\psfrag{0.2}[][][.7]{$0.2$}
\psfrag{0.4}[][][.7]{$0.4$}
\psfrag{0.6}[][][.7]{$0.6$}
\psfrag{0.8}[][][.7]{$0.8$}
\psfrag{0.9}[][][.7]{$0.9$}
\psfrag{0.99}[][][.7]{$0.99$}
\psfrag{0.999}[][][.7]{$\, 0.999$}
\psfrag{0.9999}[][][.7]{$\quad 0.9999$}
\psfrag{corr}[][]{Canon. Corr.}
\psfrag{rho}[][]{$\rho$}
\psfrag{Initialization}[l][l][0.8]{Random Init.}
\psfrag{SVD}[l][l][0.8]{SVD}
\psfrag{STOL n=500}[l][l][0.8]{STOL $n\!=\!500$}
\psfrag{NOI n=1}[l][l][0.8]{NOI $n\!=\!1$}
\includegraphics[width=0.8\linewidth]{MNISTCCA.eps}
\caption{Pure stochastic optimization of linear CCA using NOI. We show the total correlation achieved by NOI with $n=1$ on the MNIST training set at different $\rho$, by the random initialization used by NOI, by the exact solution, and by STOL with $n=500$.}
\label{f:cca-noi}
\end{figure}
\subsection{Pure stochastic optimization for CCA}
Finally, we carry out pure stochastic optimization ($n=1$) for linear CCA on the MNIST dataset. Notice that linear CCA is a special case of DCCA with $(\tilde{\mathbf{f}},\tilde{\mathbf{g}})$ both being single-layer linear networks (although we have used small weight-decay terms for the weights, leading to a slightly different objective than that of CCA). Total canonical correlations achieved by STOL with $n=500$ and by NOI (50 training epochs) on the training set with different $\rho$ values are shown in Fig.~\ref{f:cca-noi}. The objective of the random initialization and the closed-form solution (by SVD) are also shown for comparison. NOI could not improve over the random initialization without memory ($\rho=0$, corresponding to the algorithm of \cite{Ma_15b}), but it gets very close to the optimal solution and matches the objective obtained by the previous large-minibatch approach when $\rho\rightarrow 1$. This result demonstrates the importance of our adaptive estimate \eqref{e:memory} also for CCA.
\section{Conclusions}
\label{s:conclusion}
In this paper, we have proposed a stochastic optimization algorithm NOI for training DCCA, which updates the DNN weights based on small minibatches and performs competitively with previous optimizers.
One direction for future work is to better understand the convergence properties of NOI, which presents several difficulties. First, we note that convergence of the alternating least squares formulation of CCA (Algorithm~\ref{alg:cca-iterative}, or rather orthogonal iterations) is usually stated as the angle between the estimated subspace and the ground-truth subspace converging to zero. In the stochastic optimization setting, we need to relate this measure of progress (or some other measure) to the nonlinear least squares problems we are trying to solve in the NOI iterations. As discussed in Section~\ref{s:related}, even the convergence
of the linear CCA version of NOI with batch gradient descent is not well understood~\cite{Ma_15b}. Second, the use of memory in estimating covariances \eqref{e:memory} complicates the analysis and ideally we would like to come up with ways of determining the time constant $\rho$.
We have also tried using the same form of adaptive covariance estimates in both views for the STOL approach for computing the gradients \eqref{e:gradient}, but its performance with small minibatches is much worse than that of NOI. Presumably this is because the gradient computation of STOL suffers from noise in both views, which is further combined through various nonlinear operations, whereas the noise in the gradient computation of NOI only comes from the output target (due to inexact whitening); as a result, NOI is more tolerant of the noise resulting from using small minibatches. This deserves further analysis as well.
\bibliographystyle{IEEEtran}
\bibliography{allerton15a}
\end{document}
\begin{document}
\title{Infinitely many roots of unity are zeros of some Jones polynomials}
\author{Maciej Mroczkowski}
\address{Institute of Mathematics\\
Faculty of Mathematics, Physics and Informatics\\
University of Gdansk, 80-308 Gdansk, Poland\\
e-mail: [email protected]}
\begin{abstract}
Let $N=2n^2-1$ or $N=n^2+n-1$, for any $n\ge 2$. Let $M=\frac{N-1}{2}$.
We construct families of prime knots with Jones polynomials $(-1)^M\sum_{k=-M}^{M} (-1)^kt^k$. Such polynomials have Mahler measure equal to $1$. If $N$ is prime, these are cyclotomic polynomials $\Phi_{2N}(t)$, up to some shift in the powers of $t$. Otherwise, they are products of such polynomials, including $\Phi_{2N}(t)$. In particular, all roots of unity $\zeta_{2N}$ occur as roots of Jones polynomials. We also show that some roots of unity cannot be zeros of Jones polynomials.
\end{abstract}
\maketitle
\let\thefootnote\relax\footnotetext{Mathematics Subject Classification 2020: 57K10, 57K14}
\section{Introduction}
We study knots $K$ with Jones polynomials that have Mahler measure equal to $1$, or $M(V_{K}(t))=1$.
Such knots were considered in~\cite{CK,CK2}. A Laurent polynomial $P$, with $M(P)=1$, has the form: $t^a$ times a product of cyclotomic polynomials $\Phi_n$, $a\in\mathbb{Z}$. Following~\cite{CK2}, we call such $P$ {\it cyclotomic}.
The motivation for studying such knots comes from some observed connections between the Mahler measure of the Jones polynomial of a knot and its hyperbolic volume~\cite{CK,CK2}. The Mahler measure of Jones polynomials has also been studied in~\cite{T1, T2}. In a more general context, not much is known about the question: what polynomials are Jones polynomials? This is in contrast to the Alexander polynomial: there are simple conditions on a polynomial which are sufficient and necessary for it to be the Alexander polynomial of a knot. Studying the locus of the zeros of Jones polynomials is part of the general question.
It is shown in \cite{JZDT} that this locus is dense in $\mathbb{C}$. Our result implies that the locus intersected with the unit circle is dense in the unit circle.
Near the end of~\cite{CK}, after listing all knots up to $16$ crossings with cyclotomic Jones polynomials (there are only $17$ such knots), the following problem is posed:
``An interesting open question is how to construct more knots with $M(V_K(t))=1$.''
In this paper, we construct four infinite families of knots with cyclotomic Jones polynomials of a particularly simple form.
They include $4_1$ and $9_{42}$ with $V_{4_1}(t)=t^{-2}-t^{-1}+1-t+t^2$ and $V_{9_{42}}(t)=t^{-3}-t^{-2}+t^{-1}-1+t-t^2+t^3$. The Jones polynomials of the other knots extend these two examples. Their coefficients are finite sequences of alternating $1$'s and $-1$'s, starting and ending with $1$. In particular, there is no bound on the span of such Jones polynomials.
In fact, polynomials with alternating $1$'s and $-1$'s as coefficients, starting and ending with $1$, are obtained as follows. Let $m$ be odd. It is well known that, for any $a\in\mathbb{N}$, $t^a-1=\prod_{k|a}\Phi_k(t)$.
Hence,
\[t^{2m}-1=\prod_{k|2m}\Phi_k(t)=\prod_{k|m}\Phi_k(t)\prod_{k|m}\Phi_{2k}(t)=(t^m-1)\prod_{k|m}\Phi_{2k}(t)\]
It follows that $\prod_{k|m}\Phi_{2k}(t)=t^m+1=(t+1)(t^{m-1}-t^{m-2}+t^{m-3}-\ldots-t+1)$.
Since $\Phi_2(t)=t+1$, we get:
\[\prod_{k|m,k>1}\Phi_{2k}(t)=t^{m-1}-t^{m-2}+t^{m-3}-\ldots-t+1\]
For $m$ odd, we introduce the following notation:
\[\widetilde{\Phi}_{2m}(t):=t^{-\frac{m-1}{2}}\prod_{k|m,k>1}\Phi_{2k}(t)=t^{-\frac{m-1}{2}}-t^{-\frac{m-1}{2}+1}+\ldots-
t^{\frac{m-1}{2}-1}+t^{\frac{m-1}{2}}\]
We allow $m=1$ with $\widetilde{\Phi}_2(t)=1$.
We will construct knots with cyclotomic Jones polynomials equal to $\widetilde{\Phi}_{2m}$ for infinitely many odd $m$.
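These identities are easy to verify mechanically; the following SymPy snippet (illustrative, not part of the paper) checks that $\prod_{k|m,k>1}\Phi_{2k}(t)$ has the alternating coefficient pattern for a few odd $m$:

```python
from sympy import symbols, expand, cyclotomic_poly, divisors

t = symbols('t')

def alt_poly(m):
    # t^(m-1) - t^(m-2) + ... - t + 1, for odd m
    return sum((-1)**i * t**i for i in range(m))

def prod_phi(m):
    # product of Phi_{2k}(t) over the divisors k > 1 of m
    p = 1
    for k in divisors(m):
        if k > 1:
            p *= cyclotomic_poly(2 * k, t)
    return expand(p)
```

For example, $m=9$ gives $\Phi_6\Phi_{18}=t^8-t^7+t^6-t^5+t^4-t^3+t^2-t+1$.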
Notice that $\frac{d}{dt}[t^nP(t)]_{t=1}=P'(1)+n$ for any $n\in\mathbb{Z}$,
if $P$ is a Laurent polynomial satisfying $P(1)=1$. If $P$ is the Jones polynomial of a knot, it satisfies $P(1)=1$ and $P'(1)=0$ (see \cite{J}),
hence $t^nP$ cannot be a Jones polynomial for $n\neq 0$.
We say that a Laurent polynomial $P$ is {\it palindromic} if $P(t^{-1})=t^nP(t)$ for some $n\in \mathbb{Z}$; in particular, if $n=0$,
we say that it is {\it symmetric}. One checks that $P'(1)=0$ if $P$ is symmetric. Hence, a palindromic Jones polynomial of a knot must be symmetric.
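As a quick symbolic check of these derivative conditions (illustrative), take $P=V_{4_1}$, which is symmetric:

```python
from sympy import symbols, diff

t = symbols('t')
# the symmetric Jones polynomial of the figure-eight knot 4_1
P = t**-2 - t**-1 + 1 - t + t**2

P1 = P.subs(t, 1)                       # = 1, as required of a Jones polynomial
dP1 = diff(P, t).subs(t, 1)             # = 0, since P is symmetric
shifted = diff(t**3 * P, t).subs(t, 1)  # = P'(1) + 3 = 3, so t^3 P is not a Jones polynomial
```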
For $n\ge 3$, cyclotomic polynomials $\Phi_n$ are palindromic of even degree, but not symmetric (since they are not Laurent polynomials). In order to make them symmetric, we multiply $\Phi_n$ by $t^{-\frac{\varphi(n)}{2}}$ (where $\varphi$ is the Euler totient function and $\varphi(n)$ is the degree of $\Phi_n$). For $n\ge 3$, we use the notation:
\[\Phi^{sym}_n(t)=t^{-\frac{\varphi(n)}{2}}\Phi_n(t)\]
For example, $\Phi^{sym}_{10}(t)=t^{-2}-t^{-1}+1-t+t^2$
and $\Phi^{sym}_{14}(t)=t^{-3}-t^{-2}+t^{-1}-1+t-t^2+t^3$. Notice that $\Phi^{sym}_n$ and $\Phi_n$ have the same roots.
One checks that the formula for $\widetilde{\Phi}_{2m}(t)$ given above, for $m$ odd, simplifies to:
\[\widetilde{\Phi}_{2m}(t):=\prod_{k|m,k>1}\Phi^{sym}_{2k}(t)\]
The paper is organized as follows: in section~\ref{sec:main} the main theorems are stated, while the notion of arrow diagrams and some proofs are postponed to sections~\ref{sec:arrows} and~\ref{sec:proofs}.
\section{Main results}\label{sec:main}
Let $W_{n,k}$, $n,k\in\mathbb{Z}$, $k\ge 0$, be the knot shown for $n=2$ and $k=3$ in Figure~\ref{fig:wnk}, in the form of an {\it arrow diagram}. In general, there are $n$ arrows on the left kink, and $k$ arrows arranged on $k$ strands, generalizing in an obvious way the case $k=3$ shown in this figure. When $n<0$, there are $|n|$ clockwise arrows on the left kink.
In short, the arrows correspond to fibers in the Hopf fibration of $S^3$. A detailed explanation of arrow diagrams is postponed to section~\ref{sec:arrows}. In section~\ref{sec:proofs}, we compute the Jones polynomials of the knots $W_{n,k}$, denoted $V_{W_{n,k}}$:
\begin{figure}
\caption{$W_{2,3}$}
\label{fig:wnk}
\end{figure}
\begin{theorem}\label{thm:jonesWnk}
Let $n,k\in\mathbb{Z}$, $k\ge 0$. Then,
\begin{align*}
V_{W_{n,k}}=&\frac{t^{\frac{n(n-1)}{2}+k(k-1)-2nk}}{t^2-1}\left(-t^{(k+2)n+1}+t^{(k+1)(n+1)}(t^{k+1}+1)\right.\\
&\left.-t^{k(n+3)+1}+t-1\right)
\end{align*}
\end{theorem}
Denote by $D_{n,k}$ the terms in the big parenthesis in the formula for $V_{W_{n,k}}$ above. We want to check for which $n,k$ the polynomial $V_{W_{n,k}}$ is symmetric, i.e.\ $V_{W_{n,k}}(t^{-1})=V_{W_{n,k}}(t)$.
It is easy to see that a necessary condition is that $D_{n,k}(t^{-1})=-t^aD_{n,k}(t)$ for some $a\in\mathbb{Z}$.
Say that such $D_{n,k}$ is {\it antipalindromic}.
For $k=0$, $W_{n,0}$ is an oval with $n\in\mathbb{Z}$ arrows on it. Such knots are torus knots, trivial if and only if $n\in\{1,0,-1,-2\}$, see~\cite{M1}. Thus, when $W_{n,0}$ is non-trivial, its Jones polynomial is not symmetric.
\begin{theorem}\label{thm:cyclojones}
Suppose that $k>0$. The polynomial $V_{W_{n,k}}(t)$ is symmetric if and only if $n=k-1$, $k$, $2k$ or $2k+1$.
Furthermore, let $f(k)=k^2+k-1$ and $g(k)=2k^2-1$. Then, for $k>0$,
\[V_{W_{k-1,k}}(t)=\widetilde{\Phi}_{2f(k)}(t)\]
\[V_{W_{k,k}}(t)=\widetilde{\Phi}_{2f(k+1)}(t)\]
\[V_{W_{2k,k}}(t)=V_{W_{2k+1,k}}(t)=\widetilde{\Phi}_{2g(k+1)}(t)\]
\begin{proof}
We consider $D_{n,k}$. First we check when $t$ or $-1$ cancels with some other term.
The term $t$ cancels with a term $-t^a$, if $a=1$. This occurs when $n=0$ or $n=-3$.
The term $-1$ cancels if $n=-1$ or $n=-2$. One checks that $D_{n,k}$ is not antipalindromic in all these cases,
except for $(n,k)=(0,1)$. That $W_{0,1}$ is trivial is very easy to check, see section~\ref{sec:arrows}. Notice that
$\widetilde{\Phi}_{2f(1)}=\widetilde{\Phi}_2=1$.
Suppose now that $n$ and $k$ are such that neither $t$ nor $-1$ cancels, hence $n\ge 1$ or $n\le -4$.
Let $n_1=(k+2)n+1$, $n_2=k(n+3)+1$, $p_1=(k+1)(n+1)$ and $p_2=(k+1)(n+2)$ be the exponents of
the four remaining terms in $D_{n,k}$ (two negative and two positive ones).
Suppose that $n\le -4$. The four exponents are negative, with the exception $n_2=0$ for $(n,k)=(-4,1)$. The highest terms are $t-1$ or $t-2$, the lowest is $-t^{n_1}$, and the gap between $n_1$ and any other exponent is at least $5$. Hence, $D_{n,k}$ is
not antipalindromic.
Suppose that $n\ge 1$. The four exponents are greater than or equal to $4$. The four terms cannot all cancel out, since otherwise $V_{W_{n,k}}$ would not be a Laurent polynomial (one may also check case by case that, if a pair of terms cancels, another pair
does not cancel). Since the exponents of the terms $t$ and $-1$ differ by $1$, in order for $D_{n,k}$ to be antipalindromic, it should contain another pair of terms $t^a-t^{a-1}$ for some $a\ge 5$. One has:
\[p_1-n_1=k-n,\quad p_2-n_2=n-k+1,\quad p_2-n_1=2k-n+1,\quad p_1-n_2=n-2k\]
One of these four differences has to be equal to $1$, which gives four cases:
\begin{itemize}
\item $p_1-n_1=k-n=1$. Then, $p_2=n_2$ and these $2$ terms cancel out. Now:
\[D_{k-1,k}=t^{n_1}(t-1)+t-1=(t^{k^2+k-1}+1)(t-1)\]
Let $m=k^2+k-1$. One checks that:
\[V_{W_{k-1,k}}=\frac{t^{-\frac{m-1}{2}}}{t^2-1}(t^m+1)(t-1)=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2f(k)}\]
\item $p_2-n_2=n-k+1=1$. Then, $p_1=n_1$ and:
\[D_{k,k}=t^{n_2}(t-1)+t-1=(t^{k^2+3k+1}+1)(t-1)\]
Let $m=k^2+3k+1$. Again, one checks that:
\[V_{W_{k,k}}=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2f(k+1)}\]
\item $p_2-n_1=2k-n+1=1$. Then $p_1=n_2$ and:
\[D_{2k,k}=t^{n_1}(t-1)+t-1=(t^{2k^2+4k+1}+1)(t-1)\]
Let $m=2k^2+4k+1$. One checks that:
\[V_{W_{2k,k}}=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2g(k+1)}\]
\item $p_1-n_2=n-2k=1$. Then, $p_2=n_1$ and:
\[D_{2k+1,k}=t^{n_2}(t-1)+t-1=(t^{2k^2+4k+1}+1)(t-1)\]
Let $m=2k^2+4k+1$. One checks that:
\[V_{W_{2k+1,k}}=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2g(k+1)}\]
\end{itemize}
\end{proof}
\end{theorem}
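The small cases of the theorem can be cross-checked against Theorem~\ref{thm:jonesWnk} with a short SymPy computation (illustrative; `V` is a hypothetical helper transcribing the stated formula):

```python
from sympy import symbols, cancel, Rational

t = symbols('t')

def V(n, k):
    # Jones polynomial of W_{n,k} as given in Theorem thm:jonesWnk
    D = (-t**((k + 2)*n + 1) + t**((k + 1)*(n + 1)) * (t**(k + 1) + 1)
         - t**(k*(n + 3) + 1) + t - 1)
    e = Rational(n*(n - 1), 2) + k*(k - 1) - 2*n*k
    return cancel(t**e * D / (t**2 - 1))
```

For instance, $V(1,1)=\Phi^{sym}_{10}$ and $V(2,1)=V(3,1)=\Phi^{sym}_{14}$, matching the cases $n=k$, $n=2k$ and $n=2k+1$ with $k=1$.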
As an immediate consequence, we get:
\begin{theorem}
There are infinitely many roots of unity that are zeros of Jones polynomials. Such roots are dense in the unit circle.
\begin{proof}
Since for any odd $m$, $\Phi_{2m}|\widetilde{\Phi}_{2m}$, one has: for any $k>0$, $\zeta_{2f(k)}$, $\zeta_{2f(k+1)}$, $\zeta_{2g(k+1)}$ are zeros of some Jones polynomials.
Since $t^{2m}-1=(t^m-1)(t+1)\widetilde{\Phi}_{2m}t^{\frac{m-1}{2}}$, the roots of $\widetilde{\Phi}_{2m}$ are:
\[\zeta_{2m}, \zeta_{2m}^3,\ldots,\zeta_{2m}^{m-2},\zeta_{2m}^{m+2},\ldots, \zeta_{2m}^{2m-1}\]
It is clear that the roots of those $\widetilde{\Phi}_{2m}$'s that are Jones polynomials are dense in the unit circle, since there are infinitely many such $\widetilde{\Phi}_{2m}$'s.
\end{proof}
\end{theorem}
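As a quick numeric sanity check (a plain-Python sketch; the helper name \texttt{tilde\_phi\_scaled} is ours), one can verify for $m=7$ that $t^{\frac{m-1}{2}}\widetilde{\Phi}_{2m}=(t^m+1)/(t+1)$ vanishes exactly at the odd powers of $\zeta_{2m}$ listed above:

```python
import cmath

def tilde_phi_scaled(m, z):
    # t^{(m-1)/2} * tildePhi_{2m} evaluated at t = z, i.e. (z^m + 1)/(z + 1)
    return (z**m + 1) / (z + 1)

m = 7
zeta = cmath.exp(1j * cmath.pi / m)  # zeta_{2m}
# claimed roots: zeta_{2m}^j for odd j, 1 <= j <= 2m-1, j != m
roots = [j for j in range(1, 2 * m) if j % 2 == 1 and j != m]
for j in range(1, 2 * m):
    if j == m:
        continue  # zeta_{2m}^m = -1 is a zero of the denominator, not a root
    v = abs(tilde_phi_scaled(m, zeta**j))
    assert (v < 1e-9) == (j in roots)
print(len(roots))  # number of roots on the unit circle, m - 1
```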
The knots appearing in Theorem~\ref{thm:cyclojones} come in quadruplets for $k=1,2,3,\ldots$
Table~\ref{tab:k4} shows the first four quadruplets, their Jones polynomials and their crossing numbers
(together with identifications for knots up to $15$ crossings; also, the knot $W_{3,1}$ is $16n_{207543}$).
\begin{table}[h]
\begin{center}
\caption{$W_{n,k}$ with cyclotomic Jones polynomials up to $k=4$}
\label{tab:k4}
\begin{tabular}{c|c|c||c|c|c||c|c|c||c|c|c}
$K$&$V_K$&$c(K)$&$K$&$V_K$&$c(K)$&$K$&$V_K$&$c(K)$&$K$&$V_K$&$c(K)$\\
\hline
$W_{0,1}$ & $1$ & $0_1$ & $W_{1,1}$ & $\Phi^{sym}_{10}$ & $4_1$ &
$W_{2,1}$ & $\Phi^{sym}_{14}$ & $9_{42}$ & $W_{3,1}$ & $\Phi^{sym}_{14}$ & $16$\\
\hline
$W_{1,2}$ & $\Phi^{sym}_{10}$ &$11n_{19}$ & $W_{2,2}$ & $\Phi^{sym}_{22}$ & $\le 18$ &
$W_{4,2}$ & $\Phi^{sym}_{34}$ & $\le 38$ & $W_{5,2}$ & $\Phi^{sym}_{34}$& $\le 52$\\
\hline
$W_{2,3}$ & $\Phi^{sym}_{22}$ &$\le 31$& $W_{3,3}$ & $\Phi^{sym}_{38}$ &$\le 43$& $W_{6,3}$ & $\Phi^{sym}_{62}$ &$\le 89$& $W_{7,3}$ & $\Phi^{sym}_{62}$ &$\le 108$\\
\hline &&&&&&&&&&&\\[-10pt]
$W_{3,4}$ & $\Phi^{sym}_{38}$ &$\le 64$& $W_{4,4}$ & $\Phi^{sym}_{58}$ &$\le 79$&
$W_{8,4}$ & $\widetilde{\Phi}_{98}$ &$\le 159$& $W_{9,4}$ & $\widetilde{\Phi}_{98}$ &$\le 184$
\end{tabular}
\end{center}
\end{table}
Notice that $98$ is the first index that is not twice a prime, hence $\widetilde{\Phi}_{98}\neq\Phi^{sym}_{98}$.
The knots with $k>1$, except $W_{1,2}$, have more than $16$ crossings (since their Jones polynomials do not appear
up to $16$ crossings) and their crossing numbers seem to increase rapidly. From Lemma~\ref{lem:upperc} below, $c(W_{n,k})\le k^2+(n+k)^2-1$. Using Knotscape~\cite{HT} (after removing the arrows in the diagrams, see section~\ref{sec:arrows}), the number of crossings can sometimes be reduced by $1$ or $2$. Knotscape handles diagrams up to $49$ crossings and allows one
to check that the Alexander polynomial distinguishes $W_{2,2}$ from $W_{2,3}$. Since $W_{5,2}$ has a diagram with $52$ crossings, its Alexander polynomial cannot be computed with Knotscape (in order to check whether it is different from that of $W_{4,2}$). Though it seems unlikely, it is possible that some $W_{2k,k}$ is the same knot as $W_{2k+1,k}$ and/or some $W_{k,k}$
is the same knot as $W_{k,k+1}$.
As an example, using Knotscape on a diagram of $W_{4,2}$ with $39$ crossings one gets a reduction to $38$ crossings
with the following DT code:
\texttt{38 1}\;\;\;\;\;\;\;\;\texttt{6 -14 16 -28 26 -42 68 40 -50 -60 -34 36 -44 -54 52 62 -20 46
-58 -4 2 56 64 -22 70 -76 8 -10 -66 -48 72 -74 24 12 38 18 -32 30 (best available reduction)}
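As a sanity check (our own script, not part of Knotscape), one may verify that the quoted DT code is well-formed: a DT code of a $38$-crossing knot diagram consists of $38$ even integers whose absolute values are exactly $2,4,\ldots,76$, each occurring once.

```python
# The DT code quoted above, as a list of signed even labels.
dt = [6, -14, 16, -28, 26, -42, 68, 40, -50, -60, -34, 36, -44, -54,
      52, 62, -20, 46, -58, -4, 2, 56, 64, -22, 70, -76, 8, -10, -66,
      -48, 72, -74, 24, 12, 38, 18, -32, 30]
assert len(dt) == 38                                   # one label per crossing
assert all(x % 2 == 0 for x in dt)                     # all labels even
assert sorted(abs(x) for x in dt) == list(range(2, 77, 2))  # 2..76, each once
print("well-formed")
```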
We turn now to some properties of the knots $W_{n,k}$.
\begin{proposition}\label{prop:11}
The knots $W_{n,k}$ are $(1,1)$ knots. In particular they have tunnel number $1$, hence they
are prime.
\begin{proof}
We postpone the proof that $W_{n,k}$ are $(1,1)$ knots to section~\ref{sec:arrows}. Now $(1,1)$ knots have tunnel number $1$ (see~\cite{D}), hence they are prime (see~\cite{N, S}).
\end{proof}
\end{proposition}
\begin{proposition}
The knots $W_{n,k}$ with cyclotomic Jones polynomials are non-alternating, except for $0_1$ and $4_1$. There is no bound on the twist number of such knots.
\begin{proof}
Suppose that $W_{n,k}$ is nontrivial and alternating with Jones polynomial equal to some $\widetilde{\Phi}_{2m}$. From~\cite{DL}, its twist number equals $2$. Such a knot is either a connected sum of torus knots of types $(2,m)$ and $(2,n)$ or a 2-bridge knot. From Proposition~\ref{prop:11}, $W_{n,k}$ is prime, so it has to be a 2-bridge knot. Using an explicit formula for Jones polynomials of 2-bridge knots with twist number $2$ from \cite{QYA}, one easily checks that all such knots, except $4_1$, have Jones polynomials that are not equal to $\widetilde{\Phi}_{2m}$ for any $m$.
For the second part, it is shown in~\cite{CK2} that, for a family of links with cyclotomic Jones polynomials of unbounded span, there is no bound on the twist numbers of these links.
\end{proof}
\end{proposition}
We turn now to some obstructions for roots of unity being zeros of Jones polynomials.
It is well known that the Jones polynomial has special values in $1$, $\zeta_3$, $i$ and $\zeta_6$, see~\cite{J, LM}.
For a knot $K$, $V_{K}(1)=V_{K}(\zeta_3)=1$, $V_K(\zeta_4)=\pm 1$ and $V_K(\zeta_6)=\pm(i\sqrt{3})^n$, $n\in\mathbb{N}_0$.
This allows us to exclude some roots of unity as zeros of Jones polynomials:
\begin{theorem}\label{thm:excluded}
For $k\in\mathbb{N}_0$, let $N=p^k$, $3p^k$, $4p^k$, with $p$ prime; or $N=6p^k$ with $p\neq 3$ prime.
Then $\Phi_N$ cannot divide any Jones polynomial.
\begin{proof}
We check the values of cyclotomic polynomials in $1$, $\zeta_3$, $i$ and $\zeta_6$.
From \cite{BHM}, we have for $p$ prime:
\[\Phi_{p^k}(1)=p \mbox{ for } k>0 \quad (\mbox{and } \Phi_1(1)=0)\]
\[|\Phi_{3p^k}(\zeta_3)|=\Phi_{4p^k}(i)=|\Phi_{6p^k}(\zeta_6)|=p\]
We see that none of these polynomials can divide a Jones polynomial $V_K$, since $V_K(1)=V_K(\zeta_3)=|V_K(i)|=1$ and
$|V_K(\zeta_6)|=\sqrt{3}^n$, except if $p=3$ in the case of $\Phi_{6p^k}$, which we excluded in our assumptions.
\end{proof}
\end{theorem}
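The special values quoted from \cite{BHM} can be checked numerically with sympy (a sketch; we test only small primes and $k\le 2$):

```python
import sympy as sp

t = sp.symbols('t')

# Phi_{p^k}(1) = p for p prime and k > 0
for p in [2, 3, 5, 7, 11]:
    for k in [1, 2]:
        assert sp.cyclotomic_poly(p**k, 1) == p

# |Phi_{3p}(zeta_3)| = |Phi_{4p}(i)| = |Phi_{6p}(zeta_6)| = p, checked numerically
z3 = complex(-0.5, 3**0.5 / 2)   # zeta_3
z6 = complex(0.5, 3**0.5 / 2)    # zeta_6
for p in [5, 7, 11]:
    for N, z in [(3 * p, z3), (4 * p, 1j), (6 * p, z6)]:
        val = complex(sp.cyclotomic_poly(N, t).subs(t, z))
        assert abs(abs(val) - p) < 1e-8
print("ok")
```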
We can also exclude easily some $\widetilde{\Phi}_{2k}$ as divisors of Jones polynomial:
\begin{proposition}
Let $k$ be odd. If $\widetilde{\Phi}_{2k}$ divides a Jones polynomial, then $k$ is not divisible by $3$.
\begin{proof}
If $3|k$, then $\Phi_6|\widetilde{\Phi}_{2k}$, hence $\widetilde{\Phi}_{2k}(\zeta_6)=0$, which is impossible for a divisor of a Jones polynomial.
\end{proof}
\end{proposition}
Notice that all roots of unity appearing as zeros of Jones polynomials in Theorem~\ref{thm:cyclojones} are of the form $\zeta_{2k}$, with $k$ odd, $3\nmid k$. It is natural to ask what other roots of unity $\zeta_N$ can be zeros of Jones polynomials of knots.
Using Theorem~\ref{thm:excluded}, the smallest possible $N$ for such roots are $18,26,35,40,45,46,50,54,55,56,60$. Let us sum this up:
\begin{question}
Is there a knot with Jones polynomial having a zero in $\zeta_N$ such that:
$4|N$; $N$ is odd; $3|N$; or $N=2k$, $k$ odd, $3\nmid k$ but $N$ not coming from Theorem~\ref{thm:cyclojones}?
\end{question}
One may also ask whether there are infinitely many primes $p$ such that $\Phi^{sym}_{2p}$ is the Jones polynomial of some knot. A positive answer would follow if there were infinitely many primes in the image of $f$ or $g$ from Theorem~\ref{thm:cyclojones} (two special cases of the Bunyakovsky conjecture).
Since a Mersenne prime $2^p-1$, with $p>2$, satisfies $2^p-1=2\left(2^{\frac{p-1}{2}}\right)^2-1=g\left(2^{\frac{p-1}{2}}\right)$, we get:
\begin{corollary}
Let $N=2^p-1$, $p>2$, be a Mersenne prime. Then $\Phi^{sym}_{2N}$ is the Jones polynomial of a knot.
\end{corollary}
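A minimal check of the displayed identity, assuming $g(k)=2k^2-1$ as in the computation above:

```python
# g(k) = 2k^2 - 1, as in the identity 2^p - 1 = 2(2^{(p-1)/2})^2 - 1 = g(2^{(p-1)/2})
def g(k):
    return 2 * k * k - 1

mersenne_exponents = [3, 5, 7, 13, 17, 19]   # 2^p - 1 is prime for these p
for p in mersenne_exponents:
    assert g(2 ** ((p - 1) // 2)) == 2 ** p - 1
print("ok")
```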
| 1,332 | 16,165 |
en
|
train
|
0.175.3
|
\section{Arrow diagrams}\label{sec:arrows}
Arrow diagrams were introduced in~\cite{MD} for links in $F\times S^1$, where $F$ is an orientable surface. They were subsequently extended to links in Seifert manifolds (see~\cite{GM, MM1,MM2}). In~\cite{M1}, they were applied to links in $S^3$: it was shown there that projections of links under the Hopf fibration from $S^3$ to $S^2$ can be encoded with arrow diagrams in a disk: such a diagram is like a usual diagram of a link, except that it is in a disk and there may be some arrows on it, outside crossings. Two arrow diagrams represent the same link if and only if one diagram can be transformed into the other with a series of six Reidemeister moves, see Figure~\ref{fig:reid}. For the $\Omega_\infty$ move in this figure, the boundary of the disk is drawn in bold. For simplicity we can also omit this boundary when picturing arrow diagrams (as we have done in Figure~\ref{fig:wnk}).
\begin{figure}
\caption{Reidemeister moves}
\label{fig:reid}
\end{figure}
A detailed interpretation of the arrow diagrams and Reidemeister moves can be found in~\cite{M1}. One can easily picture a link $L$ from its arrow diagram $D$ in the following way: pick a solid torus $T=C\times S^1$, $C$ a disk, consisting of oriented fibers $p\times S^1$, $p\in C$, in the Hopf fibration of $S^3$. Let $S^1=I\cup I'$ consist of two intervals glued along their endpoints.
Then $T=B\cup B'$, where $B=C\times I$ and $B'=C\times I'$ are two balls. If there are no arrows in $D$, $L$ lies entirely in $B$. Otherwise it lies in $B$ except for some neighborhoods of the arrows where it goes through $B'$ along an oriented fiber and the orientation of the arrow agrees with the orientation of the fiber.
We turn now to the proof of Proposition~\ref{prop:11}. We want to show that the knots $W_{n,k}$ are $(1,1)$ knots. Recall from \cite{D} that a link $L$ admits a $(g,b)$ decomposition if there is a genus $g$ Heegaard splitting $(V_0,V_1)$ of $S^3$ such that $V_i$ intersects $L$ in $b$ trivial arcs, for $i\in\{0,1\}$. To show that $W_{n,k}$ is a $(1,1)$ knot, we need to show that it intersects each $T_i$ in a trivial arc, for a Heegaard splitting of $S^3$ into two solid tori $T_i$, $i\in\{0,1\}$.
We say that an arrow diagram of a knot is {\it annulus monotonic} if there is an annulus $A=S^1\times I$ containing the diagram and such that the curve of the diagram has exactly one minimum and one maximum w.r.t. $I$.
Applying $\Omega_\infty$ on the left kink of $W_{n,k}$ (see Figure~\ref{fig:wnk}), we obtain a diagram consisting of a spiral with some arrows on it. Such a diagram is clearly annulus monotonic, see Figure~\ref{fig:annulus}. Proposition~\ref{prop:11} now follows directly from the following:
\begin{lemma}
Suppose that a knot $K$ has an annulus monotonic arrow diagram $D$. Then $K$ is a $(1,1)$-knot.
\begin{proof}
Let $A=S^1\times I$ be an annulus containing $D$ and such that $D$ has exactly one minimum and one maximum w.r.t. $I$.
The closure of $S^3\setminus (A\times S^1)$ consists of two solid tori $T_0$ and $T_1$, chosen so that $T_i\cap (A\times S^1)=(S^1\times {i})\times S^1$, $i\in\{0,1\}$.
Cut $I$ into $I_0=[0,a]$ and $I_1=[a,1]$ for some $a\in (0,1)$, so that the intersection $(S^1\times I_0)\cap D$ is a small trivial arc. $A$ decomposes into two annuli $A_i=S^1\times I_i$, $i\in\{0,1\}$. Let $T'_i=T_i\cup (A_i\times S^1)$ be two solid tori, $i\in\{0,1\}$, so that $S^3=T'_0\cup T'_1$. Then $T'_0\cap K$ is clearly a trivial arc in $T'_0$. We claim that $T'_1\cap K$ is also a trivial arc in $T'_1$. Let $I_2=[b,1]$, $b>a$, be such that $(S^1\times I_2)\cap D$ is a small trivial arc. Let $A_2=S^1\times I_2$. Since $D$ is annulus monotonic, the pair $(A_1\times S^1,K\cap (A_1\times S^1))$ can be isotoped
to $(A_2\times S^1,K\cap (A_2\times S^1))$, by removing the tori $(S^1\times {c})\times S^1$ for $c$ from $a$ to $b$. Such isotopy clearly extends to $(T'_1,K\cap T'_1)$, so $K\cap T'_1$ is a trivial arc in $T'_1$. Thus $K$ is a $(1,1)$ knot.
\end{proof}
\end{lemma}
\begin{figure}
\caption{An annulus monotonic diagram, drawn inside the annulus}
\label{fig:annulus}
\end{figure}
We remark here that, for some $n$'s, hypothetical knots with Jones polynomials $\Phi^{sym}_n$ would only admit a $(g,b)$ decomposition with large $g+b$.
Indeed, in \cite{BHM} the values of cyclotomic polynomials in $\zeta_5$ are computed. It is shown there,
that $|\Phi_n(\zeta_5)|$ can be arbitrarily large for some $n$'s. For example, it grows very fast with the number of primes in the decomposition of $n$, when $n$ is a product of an odd number of distinct primes congruent to $2$ or $3$ modulo $5$. For instance, for $n=2\cdot 3\cdot 7\cdot 13\cdot 17$, one checks that this modulus is approximately $2207$.
On the other hand, it follows from \cite{MSY} that, if a knot $K$ admits a $(g,b)$ decomposition, its Jones polynomial $V_K$ satisfies $|V_K(\zeta_5)|\le \alpha^g\beta^{b-1}$, where $\alpha>1$ and $\beta>1$ can be explicitly computed. Hence, a large modulus implies large $g+b$.
It was shown in~\cite{M1}, that the usual blackboard framing for links obtained from their diagrams extends to arrow diagrams and that such framing is invariant under all Reidemeister moves except $\Omega_1$. In particular, to compute the writhe of a framed link represented by an arrow diagram, one may eliminate all arrows without using $\Omega_1$, then sum the signs of all crossings in the arrowless diagram.
We present now a formula for the writhe of any arrow diagram of a knot. This formula holds also for oriented links.
Let $D$ be an oriented arrow diagram. Let $r$ be an arrow in $D$. The {\it sign} of $r$, denoted $\epsilon(r)$, is defined as follows: $\epsilon(r)=1$ (resp. $\epsilon(r)=-1$), if $r$ points in the same (resp. opposite) direction as the orientation of the diagram. We also say that $r$ is {\it positive} (resp. {\it negative}).
The winding number of $r$, denoted $ind(r)$, is by definition the winding number $ind_D(P)$, where $D$ is the diagram considered as an oriented curve and $P$ is a point close to $r$, to the right of $D$ according to the orientation of $D$.
For example, consider $W_{2,3}$ in Figure~\ref{fig:wnk}. Orient it so that the left kink is oriented clockwise. Then the $3$
arrows on the right are positive and the $2$ arrows on the left are negative. Also, the winding numbers of the arrows on the right are $0$, $1$ and $2$, whereas the $2$ arrows on the left have winding number $-1$.
Denote by $w(D)$ the writhe of the framed knot represented by the arrow diagram $D$. Denote by $\bar{w}(D)$ the writhe when all arrows in $D$ are ignored (it is the sum of the signs of the crossings in $D$). We have the following formula for the writhe:
\begin{lemma}\label{lem:wr_formula}
Let $D$ be an oriented arrow diagram. Let $n=\displaystyle\sum_{r}\epsilon(r)$, the sum taken over all arrows of $D$.
Then:
\[w(D)=\bar{w}(D)+\displaystyle\sum_r 2\epsilon(r)ind(r)+n(n+1)\]
\begin{proof}
We remove with Reidemeister moves all arrows in $D$ keeping track of the signs of the crossings that appear. We do not use $\Omega_1$, thus the writhe is unchanged.
Consider an arrow $r$ in $D$. We push it next to the boundary of the diagram in such a way that the orientation of the arc next to the arrow agrees with the counterclockwise orientation of the boundary of the diagram (see Figure~\ref{fig:pn_arrows} (left),
where $3$ arrows have been pushed and the arcs are oriented as wished).
\begin{figure}
\caption{Two positive, one negative arrow (left); pushing an arrow through the arc next to it (right)}
\label{fig:pn_arrows}
\end{figure}
To achieve this, we use $\Omega_2$ and $\Omega_5$ moves repeatedly. When $r$ crosses an arc, two positive or two negative crossings appear. One checks that the total contribution, when $r$ is next to the boundary, is $2\epsilon(r)ind(r)$. Notice that when $r$ is next to the boundary, but the orientation of the arc is not the desired one, then $ind(r)=-1$ and $r$ has to be pushed once through a piece of arc next to it, see Figure~\ref{fig:pn_arrows} (right).
After all the arrows have been pushed, so they are as in Figure~\ref{fig:pn_arrows} (left), the sum of the signs of all crossings is $\bar{w}(D)+\displaystyle\sum_r 2\epsilon(r)ind(r)$.
Suppose now, that there are $a$ positive arrows and $b$ negative ones, so that $n=a-b$. Push every positive arrow through an arc next to it as in Figure~\ref{fig:pn_arrows} (right). This adds $2a$ positive crossings. Now any arrow $r$ can be eliminated with $\Omega_\infty$ followed by $\Omega_4$. We push the remaining arrows through the arc created by $\Omega_\infty$. One checks that if an arrow $r'$ is pushed through the arc coming from $r$, this adds two positive (resp. negative) crossings if $\epsilon(r)=\epsilon(r')$ (resp. $\epsilon(r)=-\epsilon(r')$). Then one repeats the process with the arrow $r'$ (eliminating it and pushing all other arrows through it). Hence, at the end any pair of arrows $r$ and $r'$ contributes $2\epsilon(r)\epsilon(r')$ to the writhe. The total contribution to the writhe of this second part is thus:
\[2a+a(a-1)+b(b-1)-2ab=(a-b)(a-b+1)=n(n+1)\]
Combined with the first part, this gives the required formula.
\end{proof}
\end{lemma}
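The closing count in the proof can be confirmed symbolically; the sympy sketch below expands $2a+a(a-1)+b(b-1)-2ab$ and compares it with $n(n+1)$ for $n=a-b$:

```python
import sympy as sp

a, b = sp.symbols('a b')

# total contribution of the arrow-elimination phase:
# 2a crossings from pushing the a positive arrows, plus a(a-1) + b(b-1) - 2ab
# from the pairwise contributions 2*eps(r)*eps(r')
second_part = 2*a + a*(a - 1) + b*(b - 1) - 2*a*b
n = a - b
assert sp.expand(second_part - n*(n + 1)) == 0
print("ok")
```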
Applying Lemma~\ref{lem:wr_formula} to $W_{n,k}$ we get:
\begin{lemma}\label{lem:writheWL}
Let $W_{n,k}$ stand for the diagram in Figure~\ref{fig:wnk}, as well as for the framed
knot represented by this diagram. Then:
\[w(W_{n,k})=n^2+n+2k^2+k-2nk\]
\begin{proof}
Orient $W_{n,k}$ so that the $k$ arrows are positive. If $n>0$ then the $n$ arrows are negative. If $n<0$ then the $|n|$ arrows are positive. The $k$ arrows have winding numbers $0, 1, 2,\ldots,k-1$. The
$n$ arrows all have winding number $-1$. Also, $\bar{w}(W_{n,k})=k$. Hence:
\[w(W_{n,k})=k+2(-n)(-1)+2(1+2+\ldots+k-1)+(k-n)(k-n+1)\]
\[=k+2n+k(k-1)+(k-n)(k-n+1)=n^2+n+2k^2+k-2nk\]
\end{proof}
\end{lemma}
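Lemma~\ref{lem:writheWL} can be cross-checked by feeding the arrow data of $W_{n,k}$ (for $n\ge 0$: $k$ positive arrows with winding numbers $0,\ldots,k-1$, $n$ negative arrows with winding number $-1$, and $\bar{w}=k$) into the formula of Lemma~\ref{lem:wr_formula}; a small Python sketch (the function name is ours):

```python
def writhe_via_formula(wbar, arrows):
    # Lemma [wr_formula]: w(D) = wbar(D) + sum_r 2*eps(r)*ind(r) + n(n+1),
    # where arrows is a list of (eps, ind) pairs and n = sum_r eps(r)
    n = sum(eps for eps, _ in arrows)
    return wbar + sum(2 * eps * ind for eps, ind in arrows) + n * (n + 1)

for n in range(0, 6):
    for k in range(1, 6):
        # arrow data of W_{n,k} for n >= 0: k positive arrows with winding
        # numbers 0..k-1, n negative arrows with winding number -1, wbar = k
        arrows = [(1, i) for i in range(k)] + [(-1, -1)] * n
        assert writhe_via_formula(k, arrows) == n**2 + n + 2*k**2 + k - 2*n*k
print("ok")
```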
For an upper estimate of the number of crossings, $c(W_{n,k})$, we use Lemma~1 from~\cite{M1}. It states that, if a diagram $D$ has $k$ crossings and all its arrows are next to the boundary, with $a$ of them removable (i.e. one can remove them with $\Omega_\infty$ followed by $\Omega_4$) and $b>0$ of them non removable, then $c(K)\le k+b-1+(a+b)(a+b-1)$. We get:
\begin{lemma}\label{lem:upperc}
For $n\ge 0$, $k\ge 0$ and $k+n>0$, one has:
\[c(W_{n,k})\le k^2+(n+k)^2-1\]
\begin{proof}
Starting with the diagram of $W_{n,k}$ shown in Figure~\ref{fig:wnk}, we push $k-1$ arrows so that they are next to the boundary. We get a diagram with $k+2(1+2+\ldots+k-1)=k+k(k-1)=k^2$ crossings. Then, we can apply Lemma~1 from~\cite{M1}. Since $n\ge 0$, all arrows will be non removable. Since $n+k>0$, there is at least one non removable arrow. Thus, $c(W_{n,k})\le k^2+(n+k)-1+(n+k)(n+k-1)=k^2+(n+k)^2-1$.
\end{proof}
\end{lemma}
We end this section with a visualization of any knot $W_{n,k}$. Such a knot is obtained by a small modification of a pair of torus knots lying on the boundary of a thickened Hopf link. It was shown in~\cite{M1} how to get simple arrow diagrams of torus knots: one checks that $W_{n,0}$ is the torus knot $T(n,n+1)$ and $W_{0,k}$ is the torus knot $T(k,2k+1)$. Consider the diagram of $W_{n,k}$ in Figure~\ref{fig:wnk}. Let $W^s_{n,k}$ be the diagram of a $2$-component link, obtained from $W_{n,k}$ by smoothing vertically the crossing next to the $n$ arrows. The components of $W^s_{n,k}$ are the torus knots $T(n,n+1)$ and $T(k,2k+1)$.
Let $D$ and $D'$ be two disjoint disks, such that $D$ contains the $n$ arrows, $D'$ contains the $k$ arrows and $W^s_{n,k}$ is contained in $D\cup D'$. Let $T$, resp. $T'$, be two solid tori consisting of fibers intersecting $D$, resp. $D'$, in the Hopf fibration of $S^3$. Then $T$ and $T'$ form a thickened Hopf link. The torus $(n,n+1)$ component of $W^s_{n,k}$ can be pushed onto $\partial T$ and the torus $(k,2k+1)$ component can be pushed onto $\partial T'$. Then $W_{n,k}$ is obtained from such two linked torus knots by reverting the smoothing back to the crossing.
\section{Jones polynomials of the knots $W_{n,k}$}\label{sec:proofs}
Let $G_n$, $G'_n$ and $G'_{a,b}$, $n,a,b\in\mathbb{Z}$, be the arrow diagrams shown in Figure~\ref{fig:gn}. In the box is an arrow tangle $G$ (a tangle with, possibly, some arrows on it). Let $g_n$, $g'_n$ and $g'_{a,b}$ be the Kauffman brackets of, respectively, $G_n$, $G'_n$ and $G'_{a,b}$.
We want to express $g'_n$ in terms of the $g_k$'s. To do so, we will use the $g'_{a,b}$'s.
\begin{figure}
\caption{$G_n$, $G'_n$ and $G'_{a,b}$}
\label{fig:gn}
\end{figure}
It is useful to define for $n\ge0$ the sum:
\[S_n=A^{n}g_{-n}+A^{n-2}g_{-n+2}+\ldots+A^{-n+2}g_{n-2}+A^{-n}g_n=\displaystyle\sum_{i=0}^n A^{n-2i}g_{-n+2i}\]
Extend $S_n$ to negative $n$ by defining $S_{-1}=0$ and, for $n<-1$:
\[S_n=-S_{|n|-2}\]
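The definition and its extension can be encoded symbolically (a sympy sketch with formal symbols $g_i$ via \texttt{IndexedBase}); we check the step-two recursion $S_{n+2}=S_n+A^{n+2}g_{-n-2}+A^{-n-2}g_{n+2}$, which is used in the induction of the next section:

```python
import sympy as sp

A = sp.symbols('A')
g = sp.IndexedBase('g')   # g[i] stands for the bracket g_i of G_i

def S(n):
    if n == -1:
        return sp.Integer(0)
    if n < -1:
        return -S(-n - 2)            # the extension S_n = -S_{|n|-2}
    return sum(A**(n - 2*i) * g[-n + 2*i] for i in range(n + 1))

# the step-two recursion: S_{n+2} = S_n + A^{n+2} g_{-n-2} + A^{-n-2} g_{n+2}
for n in range(0, 6):
    lhs = S(n + 2)
    rhs = S(n) + A**(n + 2) * g[-n - 2] + A**(-n - 2) * g[n + 2]
    assert sp.expand(lhs - rhs) == 0
print("ok")
```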
\begin{lemma}\label{lem:kink}
For $n\in\mathbb{Z}$:
\[g'_n=(A^{-1}-A^3)A^nS_n-A^{2n-1}g_{-n}\]
\begin{proof}
One checks easily that the formula holds for $n=0$ and $n=-1$.
From the defining relations of the Kauffman bracket, we get:
\[
\raisebox{-7pt}{\includegraphics{Lplus}}=A^2 \raisebox{-7pt}{\includegraphics{Lminus}}
+(A^{-1}-A^{3}) \raisebox{-7pt}{\includegraphics{Lzero}}\]
Using this relation and $\Omega_5$ and $\Omega_4$ moves, we get:
\[g'_{a,b}=A^2g'_{a-1,b-1}+(A^{-1}-A^3)g_{a+b}\label{eq:ttt}\tag{*}\]
Suppose that $n\ge 1$. Iterating equation~(\ref{eq:ttt}) until $g'_{0,-n}=-A^3g_{-n}$, we get:
\begin{align*}
g'_{n,0}=&\quad A^2g'_{n-1,-1}+(A^{-1}-A^3)g_{n}\\
=&\quad A^4g'_{n-2,-2}+A^2(A^{-1}-A^3)g_{n-2}+(A^{-1}-A^3)g_{n}\\
=&\quad \ldots\\
= &\quad -A^3A^{2n}g_{-n}+(A^{-1}-A^3)\left(g_n+A^2g_{n-2}+\ldots+A^{2n-2}g_{-n+2}\right)\\
=&\quad -A^{2n-1}g_{-n}+(A^{-1}-A^3)\left(g_n+A^2g_{n-2}+\ldots+A^{2n}g_{-n}\right)\\
=&\quad (A^{-1}-A^3) A^n S_n -A^{2n-1}g_{-n}
\end{align*}
Suppose now that $n\le -2$. Rewriting equation~(\ref{eq:ttt}) and replacing $a$ by $a+1$ and $b$ by $b+1$, one gets:
\[g'_{a,b}=A^{-2}g'_{a+1,b+1}-A^{-2}(A^{-1}-A^3)g_{a+b+2}\]
Iterating until $g'_{n+|n|,|n|}=g'_{0,-n}=-A^3g_{-n}$, we get:
\begin{align*}
g'_{n,0}=&\quad A^{-2}g'_{n+1,1}-A^{-2}(A^{-1}-A^3)g_{n+2}\\
=&\quad A^{-4}g'_{n+2,2}-A^{-4}(A^{-1}-A^3)g_{n+4}-A^{-2}(A^{-1}-A^3)g_{n+2}\\
=&\quad \ldots\\
= &\quad -A^3A^{2n}g_{-n}-A^{-2}(A^{-1}-A^3)\left(g_{n+2}+A^{-2}g_{n+4}+\ldots+A^{2n+2}g_{-n}\right)\\
= &\quad -A^{2n-1}g_{-n}-A^{-2}(A^{-1}-A^3)\left(g_{n+2}+A^{-2}g_{n+4}+\ldots+A^{2n+4}g_{-n-2}\right)\\
= &\quad -A^{2n-1}g_{-n}-A^{-2}(A^{-1}-A^3)A^{n+2}\left(A^{-n-2}g_{n+2}+\ldots+A^{n+2}g_{-n-2}\right)\\
= &\quad -A^{2n-1}g_{-n}-(A^{-1}-A^3)A^{n}S_{|n|-2}\\
= &\quad (A^{-1}-A^3)A^{n}S_{n}-A^{2n-1}g_{-n}
\end{align*}
\end{proof}
\end{lemma}
We now prove Theorem~\ref{thm:jonesWnk} by induction on $k$. $W_{n,0}$ is an oval with $n\in\mathbb{Z}$ arrows on it. This is the torus knot $T(n,n+1)$ if $n\ge 0$ and, for $n<0$, $W_{n,0}=W_{-1-n,0}$ (use $\Omega_\infty$ and $\Omega_4$ moves). One checks that for $k=0$ the formula in Theorem~\ref{thm:jonesWnk} is the correct formula for such torus knots (see also~\cite{M1}).
We restate Theorem~\ref{thm:jonesWnk} in terms of the Kauffman bracket using the formula $V_K(t)=(-A)^{-3w(K)}<K>$,
where $w(K)$ is the writhe of $K$ and $t=A^{-4}$.
From Lemma~\ref{lem:writheWL}, we have $w(W_{n,k})=n^2+n+2k^2+k-2nk$, hence $(-1)^{3w(W_{n,k})}=(-1)^k$ and:
\[<W_{n,k}>=(-1)^kA^{3(n+1)n+ 3k(2k+1-2n)}V_{W_{n,k}}\]
One checks that Theorem~\ref{thm:jonesWnk} can be restated as:
\begin{proposition}
\[<W_{n,k}>(A^{-8}-1)=(-1)^kA^{n^2-2kn+2k^2+n-k-8}\]
\[\left((1-A^4)A^{4kn+4n+8k+4}-A^{4n-4k+4}+A^{4k+4}-A^{8k-4n+4}+1\right)\]
\begin{proof}
For $k=0$ the formula is correct, since it is correct for the Jones polynomial and we just restate it with the Kauffman bracket using the writhe.
Let $k\ge 0$. Assume that the formula holds for $<W_{n,k}>$, $n\in\mathbb{Z}$.
We use Lemma~\ref{lem:kink} with $W_{n,k+1}=G'_n$. One has to identify $G_a$ from that lemma, for any $a\in\mathbb{Z}$: its diagram is shown in Figure~\ref{fig:WGn} (for $k=2$ as an example). Using a single $\Omega_\infty$ move on the strand with the $a+1$ arrows, one gets $W_{-a-2,k}$ (one extra arrow comes from the move). Since this move does not change the writhe, $<G_a>=<W_{-a-2,k}>$.
\begin{figure}
\caption{Identifying $G_a$}
\label{fig:WGn}
\end{figure}
Recall the notations used in Lemma~\ref{lem:kink}:
\[g'_n=<G'_n>,\quad g_a=<G_a>,\quad S_n=\displaystyle\sum_{i=0}^n A^{n-2i}g_{-n+2i}\]
Also, by definition: $S_{-1}=0$ and $S_n=-S_{|n|-2}$ for $n<-1$.
Let:
\begin{align*}
S'_{n}=&(-1)^kA^{n^2-2kn-6n+2k^2-k-10}\left(-A^{4kn+12n+12k+16}\right.\\
&\left.+A^{4kn+8n+4k}+A^{4n+8k+8}-A^{8n}\right)
\end{align*}
One checks that $S'_{-1}=0$ and $S'_n=-S'_{-n-2}$ for any $n\in\mathbb{Z}$.
We claim that for any $n\in\mathbb{Z}$:
\[S_{n}(A^{-8}-1)=S'_{n} \tag{**}\label{ssp}\]
Because of the skew-symmetry of both $S_n$ and $S'_n$ around $-1$, it is sufficient to prove~(\ref{ssp}) for $n\ge -1$.
It is true for $n=-1$. Now $S_{0}=g_0=<W_{-2,k}>$. By induction on $k$:
\[<W_{-2,k}>(A^{-8}-1)=(-1)^k A^{2k^2-k-10}(-A^{12k+16}+A^{8k+8}+A^{4k}-1)=S'_{0}\]
One has obviously:
\begin{align*}
S_{n+2}&=S_n+A^{n+2}g_{-n-2}+A^{-n-2}g_{n+2}\\
&=S_n+A^{n+2}<W_{n,k}>+A^{-n-2}<W_{-n-4,k}>
\end{align*}
Thus, to prove~(\ref{ssp}), we need to show
that:
\[S'_{n+2}=S'_n+(A^{-8}-1)\left(A^{n+2}<W_{n,k}>+A^{-n-2}<W_{-n-4,k}>\right)\]
One checks:
\[S'_{n+2}=(-1)^kA^{n^2-2kn+2k^2-2n-5k-18}\left(-A^{4kn+12n+20k+40}\right.\]
\[\left.+A^{4kn+8n+12k+16}-A^{8n+16}+A^{4n+8k+16}\right)\]
By induction on $k$:
\begin{align*}
&S'_n+(A^{-8}-1)\left(A^{n+2}<W_{n,k}>+A^{-n-2}<W_{-n-4,k}>\right)=\\
&(-1)^kA^{n^2-2kn-6n+2k^2-k-10}\left(-A^{4kn+12n+12k+16}\right.\\
&\left.+A^{4kn+8n+4k}+A^{4n+8k+8}-A^{8n}\right)+(-1)^kA^{n^2-2kn+2k^2+2n-k-6}\\
&\left((1-A^4)A^{4kn+4n+8k+4}-A^{4n-4k+4}+A^{4k+4}-A^{8k-4n+4}+1\right)\\
&+(-1)^k A^{n^2+2kn+2k^2+6n+7k+2}\left((1-A^4)A^{-4kn-4n-8k-12}-A^{-4n-4k-12}\right.\\
&\left.+A^{4k+4}-A^{8k+4n+20}+1\right)=(-1)^kA^{n^2-2kn+2k^2-2n-5k-18}\\
&\left(-A^{4kn+8n+16k+24}+A^{4kn+4n+8k+8}+A^{12k+16}-A^{4n+4k+8}\right.\\
&+(1-A^4)A^{4kn+8n+12k+16}-A^{8n+16}+A^{4n+8k+16}-A^{12k+16}+A^{4n+4k+12}\\
&+(1-A^4)A^{4n+4k+8}-A^{4kn+4n+8k+8}+A^{4kn+8n+16k+24}\\
&\left.-A^{4kn+12n+20k+40}+A^{4kn+8n+12k+20}\right)=S'_{n+2}
\end{align*}
Since $g_{-n}=<W_{n-2,k}>$, from Lemma~\ref{lem:kink} we get:
\[<W_{n,k+1}>=(A^{-1}-A^3)A^{n}S_{n}-A^{2n-1}<W_{n-2,k}>\]
Hence:
\begin{align*}
&<W_{n,k+1}>(A^{-8}-1)=(A^{-1}-A^3)A^{n}S'_{n}-A^{2n-1}<W_{n-2,k}>(A^{-8}-1)\\
&=(-1)^k(A^{-1}-A^3)A^{n^2-2kn-5n+2k^2-k-10}\left(-A^{4kn+12n+12k+16}+A^{4kn+8n+4k}\right.\\
&\left.+A^{4n+8k+8}-A^{8n}\right)-(-1)^kA^{n^2-2kn+2k^2-n+3k-7}\left((1-A^4)A^{4kn+4n-4}\right.\\
&\left.-A^{4n-4k-4}+A^{4k+4}-A^{8k-4n+12}+1\right)=(-1)^kA^{n^2-2kn+2k^2-n+3k-7}\\
&\left(-(1-A^4)A^{4kn+8n+8k+12}+(1-A^4)A^{4kn+4n-4}+(1-A^4)A^{4k+4}\right.\\
&-(1-A^4)A^{4n-4k-4}-(1-A^4)A^{4kn+4n-4}+A^{4n-4k-4}-A^{4k+4}\\
&\left.+A^{8k-4n+12}-1\right)=(-1)^{k+1}A^{n^2-2kn+2k^2-n+3k-7}\\
&\left((1-A^4)A^{4kn+8n+8k+12}-A^{4n-4k}+A^{4k+8}-A^{8k-4n+12}+1\right)\\
&=(-1)^{k+1}A^{n^2-2(k+1)n+2(k+1)^2+n-(k+1)-8}\left((1-A^4)A^{4(k+1)n+4n+8(k+1)+4}\right.\\
&\left.-A^{4n-4(k+1)+4}+A^{4(k+1)+4}-A^{8(k+1)-4n+4}+1\right)
\end{align*}
Thus the formula holds for $<W_{n,k+1}>$ and we are done.
\end{proof}
\end{proposition}
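The quantities in this proof can be verified symbolically for small $n$ and $k$. The sympy sketch below encodes $<W_{n,k}>(A^{-8}-1)$ and $S'_n$ exactly as displayed above (the function names are ours) and checks $S'_{-1}=0$, the value of $S'_0$, the recursion~(\ref{ssp}) and the induction step:

```python
import sympy as sp

A = sp.symbols('A')

def WB(n, k):
    # <W_{n,k}>(A^{-8}-1), exactly as displayed in the proposition
    pref = (-1)**k * A**(n**2 - 2*k*n + 2*k**2 + n - k - 8)
    return pref * ((1 - A**4) * A**(4*k*n + 4*n + 8*k + 4)
                   - A**(4*n - 4*k + 4) + A**(4*k + 4)
                   - A**(8*k - 4*n + 4) + 1)

def Sp(n, k):
    # S'_n for the given k
    pref = (-1)**k * A**(n**2 - 2*k*n - 6*n + 2*k**2 - k - 10)
    return pref * (-A**(4*k*n + 12*n + 12*k + 16) + A**(4*k*n + 8*n + 4*k)
                   + A**(4*n + 8*k + 8) - A**(8*n))

ok = True
for k in range(3):
    ok = ok and sp.expand(Sp(-1, k)) == 0                 # S'_{-1} = 0
    ok = ok and sp.expand(Sp(0, k) - WB(-2, k)) == 0      # S'_0 = <W_{-2,k}>(A^{-8}-1)
    for n in range(-3, 4):
        # recursion (**): S'_{n+2} = S'_n + A^{n+2} WB(n,k) + A^{-n-2} WB(-n-4,k)
        ok = ok and sp.expand(Sp(n + 2, k) - Sp(n, k) - A**(n + 2) * WB(n, k)
                              - A**(-n - 2) * WB(-n - 4, k)) == 0
        # induction step: WB(n,k+1) = (A^{-1}-A^3) A^n S'_n - A^{2n-1} WB(n-2,k)
        ok = ok and sp.expand(WB(n, k + 1) - (A**(-1) - A**3) * A**n * Sp(n, k)
                              + A**(2*n - 1) * WB(n - 2, k)) == 0
print(ok)
```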
\end{document}
\begin{document}
\title{Recursive formulation of the
multiconfigurational time-dependent Hartree method
for fermions, bosons and mixtures thereof
in terms of one-body density operators}
\author{Ofir E. Alon$^{1\ast}$\footnote[0]{$^{\ast}$ [email protected]},
Alexej I. Streltsov$^{2\dag}$\footnote[0]{$^{\dag}$ [email protected]},
Kaspar Sakmann$^{2\ddag}$\footnote[0]{$^{\ddag}$ [email protected]},\break
Axel U. J. Lode$^{2\S}$\footnote[0]{$^{\S}$ [email protected]},
Julian Grond$^{2\P}$\footnote[0]{$^{\P}$ [email protected]},
and Lorenz S. Cederbaum$^{2\parallel}$\footnote[0]{$^{\parallel}$ [email protected]}}
\affiliation{$^{1}$ Department of Physics, University of Haifa at Oranim, Tivon 36006, Israel.}
\affiliation{$^{2}$ Theoretische Chemie, Physikalisch-Chemisches Institut, Universit\"at Heidelberg,\\
Im Neuenheimer Feld 229, D-69120 Heidelberg, Germany.}
\begin{abstract}
The multiconfigurational time-dependent Hartree method (MCTDH)
[H.-D. Meyer, U. Manthe, and L. S. Cederbaum, Chem. Phys. Lett. {\bf 165},
73 (1990); U. Manthe, H.-D. Meyer, and L. S. Cederbaum, J. Chem. Phys. {\bf 97}, 3199
(1992)] is nowadays entering its third decade of
tackling, numerically exactly, a broad range of correlated
multi-dimensional non-equilibrium quantum dynamical systems.
Explicitly taking particles' statistics into account in recent years,
within the MCTDH for fermions (MCTDHF) and for bosons (MCTDHB),
has opened up further opportunities to treat larger systems
of interacting identical particles,
primarily in laser-atom and cold-atom physics.
With the increase of experimental capabilities to
simultaneously trap mixtures of two, three, and possibly even
multiple kinds of
interacting composite identical particles together,
we set the stage in the present work and
specify the MCTDH method for such cases.
Explicitly,
the MCTDH method for systems with three kinds of identical
particles interacting via all combinations of two- and three-body forces is presented,
and the resulting equations-of-motion are briefly discussed.
All four possible mixtures (Fermi-Fermi-Fermi,
Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose)
are presented in a unified manner.
Particular attention is paid to represent the
coefficients' part of the equations-of-motion
in a compact recursive form
in terms of one-body density operators only.
The recursion utilizes the recently proposed
Combinadic-based mapping for fermionic and bosonic operators in Fock space
[A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A {\bf 81}, 022124 (2010)]
and successfully applied and implemented within MCTDHB.
Our work sheds new light on the
representation of the coefficients'
part in MCTDHF and MCTDHB
without resorting to the
matrix elements of the many-body Hamiltonian
with respect to the time-dependent configurations.
It suggests a recipe for
efficient implementation of
the schemes derived here for mixtures,
which is suitable for parallelization.
\end{abstract}
\pacs{31.15.xv, 67.60.-g, 05.30.Fk, 05.30.Jp, 03.65.-w}
\maketitle
\section{Introduction}\label{SEC1}
Quantum non-equilibrium dynamics is important to many branches of physics and chemistry
\cite{Book_dynamics1,Book_dynamics2,Nuclear_book,Book_dynamics3,Pit_Stri_book,Book_dynamics4}
and often requires the solution of the time-dependent
many-particle Schr\"odinger equation.
A particularly efficient method
to solve the time-dependent
many-particle Schr\"odinger equation
is the multiconfigurational time-dependent Hartree
(MCTDH) algorithm and approach \cite{cpl,jcp,review,book}.
MCTDH,
which is considered at present the most efficient wave-packet
propagation tool,
has amply been employed for multi-dimensional
dynamical systems of distinguishable degrees-of-freedom,
typically molecular vibrations, see, e.g.,
Refs.~\cite{JCP_24a,JCP_24b,Manthe_review,Lenz_CI,relaxation2,vib_new1,vib_new2,irene}.
We mention that recent developments on multi-layer formulation of MCTDH
have opened up further possibilities to treat
larger systems of distinguishable
degrees-of-freedom \cite{ML_1,ML_2,ML_3}.
MCTDH has recently been applied with much success to various
systems with a few identical particles
in the field of cold-atom physics,
see, e.g., Refs.~\cite{ZO_st1,ZO_st2,ZO_dy2,Sascha_mix,axel,Sascha_dip}.
In recent years,
taking the quantum statistics between identical particles {\it a priori}
into account,
the MCTDH method has been specified
for systems of identical particles,
which opened up interesting possibilities to treat larger systems.
First MCTDHF -- the fermionic version of MCTDH --
was developed by three independent groups \cite{MCTDHF1,MCTDHF2,MCTDHF3}.
Shortly after,
MCTDHB -- the bosonic version of MCTDH --
was developed in \cite{MCTDHB0,MCTDHB1}.
For applications of MCTDHF
to laser-matter interaction and other few-fermion problems see, e.g.,
Refs.~\cite{applF1,applF2,applF3,applF4,applF5,applF6,applF7,applF7m5,applF8,applF9},
where the last work combines optimal control theory with MCTDHF.
For applications of MCTDHB
to Bose-Einstein condensates see, e.g.,
Refs.~\cite{applB1,applB2,applB3,applB4,applB5},
where the last two works combine optimal
control theory with MCTDHB.
Since the seminal paper of L\"owdin \cite{Lowdin},
reduced density matrices and particularly reduced two-body density
matrices have been a lively field of research, see, e.g.,
Refs.~\cite{Slava,MAZZ1,MAZZ2,MAZZ3,MAZZ4,MAZZ5,MAZZ6}.
Reduced one-body density matrices
are an inherent
part of the MCTDH \cite{cpl,jcp,review,book}.
In the present context,
reduced one- and two-body density matrices
were first used to derive the
static self-consistent theory for bosons,
the multiconfigurational Hartree for bosons (MCHB) in \cite{MCHB}.
Thereafter,
MCTDHB and MCTDHF were formulated in a unified manner
by employing reduced one-, two- \cite{unified} and three-body \cite{book}
density matrices.
Further specification of MCTDH to mixtures of
two kinds of identical particles
(MCTDH-FF for Fermi-Fermi mixtures;
MCTDH-BF for Bose-Fermi mixtures;
and
MCTDH-BB for Bose-Bose mixtures)
was put forward in \cite{MCTDHX}.
All the above developments made use of the
fact that the mean-field operators in
the traditional MCTDH can be factorized into
products of reduced density matrices
times one-body operators.
Finally,
we mention that
MCTDH has been extended to systems with particle conversion
(termed MCTDH-{\it conversion}),
where particles of one kind can
convert to another kind \cite{conversion}.
A breakthrough in the formulation \cite{mapping,3well} and implementation \cite{package}
of MCTDHB has stemmed from a general Combinadic-based mapping
of bosonic (and fermionic) operators in Fock space.
In this formulation,
the direct
calculation of the matrix representation of
the Hamiltonian in the (huge) multiconfigurational
space is abandoned,
and is replaced by the action of one-body and two-body
density operators on the multiconfigurational wave-function.
The operation of the various density operators can be
performed in parallel \cite{package},
which further accelerates
the performance of the algorithm.
This brings us closer to the topic and
contents of the present work.
Two-body interaction is the most basic interaction in an interacting (quantum) system.
When the particles comprising the quantum system have internal structure,
higher-order interactions (forces) may come into play.
For instance, in nuclear physics it has long been accepted that three-body interactions
are necessary to fully understand the structure of nuclei, see, e.g., \cite{nuc3b,nuc3a}.
Much more recently, and in the context of another field,
the proposition to utilize cold polar molecules to engineer
(condensed-matter) systems with three-body interactions
has been made \cite{cold3}.
So, the motivation to study the non-equilibrium dynamics
of systems with up to three-body forces is clear.
But why study the quantum dynamics of a mixture of three kinds of identical particles?
Are such systems present in nature?
In the cold-atom world,
the plurality of atoms is one of the most important ingredients
experimentalists (and theorists)
have at their disposal.
For instance,
the element Yb has
seven stable isotopes
(five bosonic and two fermionic).
Yb has been envisaged to play an instrumental
role in realizing various interesting ultra-cold mixtures
(see Ref.~\cite{Yb} for a realization of a
Bose-Einstein condensate with $^{170}$Yb atoms
and the discussion therein).
More recently,
a quantum degenerate
Fermi-Fermi mixture of $^6$Li-$^{40}$K atoms
coexisting with a Bose-Einstein condensate
of $^{87}$Rb atoms was realized \cite{TH_2008},
as well as a triply quantum-degenerate mixture of
bosonic $^{41}$K atoms
and two kinds of fermionic atoms, $^{40}$K and $^6$Li \cite{MZ_2011}.
Hence,
mixtures of three kinds of identical particles
have been created in the lab.
All the above dictate the purposes and contents of the present work.
The MCTDH method for
systems with three kinds of identical
particles interacting via all combinations of two- and three-body forces is derived,
and the resulting equations-of-motion are briefly discussed.
All four possible mixtures (Fermi-Fermi-Fermi,
Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose)
are presented in a unified manner.
Particular attention is paid to representing the
coefficients' part of the equations-of-motion
in a compact recursive form
in terms of one-body density operators only.
The recursion utilizes the recently proposed
Combinadic-based mapping \cite{mapping}
which has already been successfully applied
and implemented within MCTDHB \cite{package}.
Our work sheds new light on the
representation of the coefficients'
part in MCTDHF and MCTDHB
without resorting to the
matrix elements of the many-body Hamiltonian
with respect to the time-dependent configurations,
and suggests a recipe for
efficient implementation of
the theory derived here for mixtures
which is suitable for parallelization.
The structure of the paper is as follows.
In Sec.~\ref{SEC2} we present the building bricks
of the theory by reconstructing MCTDHF and MCTDHB.
In Sec.~\ref{SEC3} we assemble from
these ingredients the multiconfigurational
time-dependent Hartree method for mixtures
of three kinds
of identical particles
interacting via up to three-body forces.
A brief summary and outlook are given in Sec.~\ref{SEC4}.
Finally,
we collect in Appendixes \ref{appendix_A}-\ref{appendix_C}
for completeness and ease
of presentation of the main text
various quantities appearing and needed
in the derivation.
The paper and the Appendixes are detailed and
intended also to serve as a guide for
the implementation of the equations-of-motion.
The reconstruction of
MCTDHF and MCTDHB is given in sufficient detail.
This allows us to defer to the Appendixes
much of the lengthy formulas
used later on for the mixtures.
\section{Building bricks: Reconstructing MCTDHF and MCTDHB}\label{SEC2}
\subsection{From basic ingredients to mapping}\label{SEC2.1}
Our starting point is the many-body Hamiltonian of
$N_A$ interacting identical particles of type $A$:
\begin{eqnarray}\label{ham}
& & \hat H^{(A)} = \hat h^{(A)} + \hat W^{(A)} + \hat U^{(A)} =
\int d{\bf x}\bigg\{ \hat{\mathbf \Psi}^\dag_A(\x) \hat h^{(A)}(\x) \hat{\mathbf \Psi}_A(\x) + \nonumber \\
&+& \frac{1}{2} \int d\x' \bigg[ \hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_A(\x') \hat W^{(A)}(\x,\x')
\hat{\mathbf \Psi}_A(\x') \hat{\mathbf \Psi}_A(\x) + \\
&+& \frac{1}{3} \int d\x'' \hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_A(\x') \hat{\mathbf \Psi}^\dag_A(\x'')
\hat U^{(A)}(\x,\x',\x'') \hat{\mathbf \Psi}_A(\x'') \hat{\mathbf \Psi}_A(\x')
\hat{\mathbf \Psi}_A(\x) \bigg] \bigg\}, \nonumber
\end{eqnarray}
where $\hat h^{(A)}$ is the one-body part,
$\hat W^{(A)}$ the two-body part
and $\hat U^{(A)}$ the three-body part.
The operators $\hat h^{(A)}$, $\hat W^{(A)}$ and $\hat U^{(A)}$
can generally be time-dependent.
We use the time-independent field operator expanded by time-dependent orbitals:
\begin{equation}\label{field}
\hat{\mathbf \Psi}_A(\x) = \sum_k \hat a_k(t)\phi_k(\x,t),
\end{equation}
where the annihilation and creation operators obey
the usual fermionic/bosonic anti/commutation relations,
$\hat a_q(t) \hat a_k^\dag(t) \pm \hat a_k^\dag(t) \hat a_q(t) = \delta_{kq}$.
Correspondingly,
the field operator obeys the anti/commutation relations,
$\hat{\mathbf \Psi}_A(\x) \left\{\hat{\mathbf \Psi}_A(\x')\right\}^\dag \pm
\left\{\hat{\mathbf \Psi}_A(\x')\right\}^\dag \hat{\mathbf \Psi}_A(\x) = \delta(\x-\x')$.
Here and hereafter the upper sign refers
to fermions and the lower to bosons.
The coordinate ${\bf x}\equiv \{\r, \sigma\}$
stands for spatial degrees of freedom and spin,
if present.
Thus, the shorthand notations
$\delta(\x-\x')=\delta(\r-\r')\delta_{\sigma,\sigma'}$
and $\int d{\bf x}\equiv \int d{\bf r}\sum_\sigma$
are implied throughout this work.
Furthermore,
we do not denote explicitly
the dependence of quantities on time when unambiguous.
Plugging the expansion (\ref{field}) into the many-body
Hamiltonian (\ref{ham}) one gets:
\begin{equation}\label{ham2nd}
\hat H^{(A)} =
\sum_{k,q} h^{(A)}_{kq} \hat \rho^{(A)}_{kq}
+ \frac{1}{2} \sum_{k,s,q,l} W^{(A)}_{ksql} \hat \rho^{(A)}_{kslq}
+ \frac{1}{6}\sum_{k,s,p,r,l,q} U^{(A)}_{kspqlr} \hat \rho^{(A)}_{ksprlq},
\end{equation}
where the matrix elements with respect to the orbitals $\left\{\phi_k(\x,t)\right\}$ are given by:
\begin{eqnarray}\label{matrix_elements}
h^{(A)}_{kq} &=& \int \phi_k^\ast(\x,t) \hat h^{(A)}(\x) \phi_q(\x,t) d\x, \nonumber \\
W^{(A)}_{ksql} &=& \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \hat W^{(A)}(\x,\x')
\phi_q(\x,t) \phi_l(\x',t) d{\bf x}d\x', \nonumber \\
U^{(A)}_{kspqlr} &=& \int \!\! \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t)
\phi_p^\ast(\x'',t) \hat U^{(A)}(\x,\x',\x'') \times \nonumber \\
&\times& \phi_q(\x,t) \phi_l(\x',t) \phi_r(\x'',t) d{\bf x}d\x' d\x''.
\end{eqnarray}
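To illustrate how such orbital integrals are evaluated in practice, the following standalone sketch (our own illustration, not part of the formalism) computes the one-body matrix elements $h^{(A)}_{kq}$ on an equidistant grid; the harmonic-oscillator Hamiltonian and its two lowest eigenfunctions are hypothetical stand-ins for $\hat h^{(A)}$ and the time-dependent orbitals:

```python
import numpy as np

# Sketch: one-body matrix elements h_kq = int phi_k^*(x) h(x) phi_q(x) dx on a
# grid, for the 1D harmonic oscillator h = -(1/2) d^2/dx^2 + x^2/2 (hbar=m=w=1).
# phi_0, phi_1 are its two lowest eigenfunctions, so exactly h = diag(1/2, 3/2).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
phi = np.array([
    np.pi**-0.25 * np.exp(-x**2 / 2),                     # ground state
    np.pi**-0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2),  # first excited state
])

def one_body_matrix(phi):
    M = len(phi)
    h = np.zeros((M, M))
    for q in range(M):
        d2 = np.gradient(np.gradient(phi[q], dx), dx)     # central differences
        h_phi_q = -0.5 * d2 + 0.5 * x**2 * phi[q]         # h acting on phi_q
        for k in range(M):
            h[k, q] = np.sum(phi[k] * h_phi_q) * dx       # grid quadrature
    return h
```

The finite-difference kinetic energy limits the accuracy to $O(dx^2)$; in actual MCTDH implementations the derivatives are evaluated with the discrete variable representation or fast Fourier transforms instead.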
In (\ref{ham2nd}),
we introduce the one-body density operators
\begin{equation}\label{density_oper_1B}
\hat \rho^{(A)}_{kq} = \hat a_k^\dag \hat a_q,
\end{equation}
as well as the two- and three-body density operators
\begin{eqnarray}\label{density_oper_2B_3B}
& & \hat \rho^{(A)}_{kslq} = \hat a_k^\dag \hat a_s^\dag \hat a_l \hat a_q =
\pm \hat \rho^{(A)}_{kq} \delta_{sl} \mp \hat \rho^{(A)}_{kl} \hat \rho^{(A)}_{sq}, \nonumber \\
& & \hat \rho^{(A)}_{ksprlq} = \hat a_k^\dag \hat a_s^\dag \hat a_p^\dag \hat a_r \hat a_l \hat a_q =
\pm \hat \rho^{(A)}_{kslq} \delta_{pr} - \hat \rho^{(A)}_{ksrq}
\delta_{pl} + \hat \rho^{(A)}_{ksrl} \hat \rho^{(A)}_{pq}.
\end{eqnarray}
The reason for
this choice of notation with density operators
in (\ref{ham2nd}) will
become clear below.
We see that the two-body density operators $\left\{\hat \rho^{(A)}_{kslq}\right\}$
can be written as products
of the one-body density operators,
and that the three-body density operators $\left\{\hat \rho^{(A)}_{ksprlq}\right\}$
can be written
as products of the two- and one-body density operators,
and so on, recursively.
Hence,
the one-body density
operators $\left\{\hat \rho^{(A)}_{kq}\right\}$ in (\ref{density_oper_1B})
are our basic building bricks.
The many-body wave-function is expanded by time-dependent
configurations (determinants $\left|\i;t\right>$ for fermions, permanents $\left|\n;t\right>$ for bosons)
assembled by distributing the $N_A$ particles over
the $M_A$ time-dependent orbitals introduced in
the expansion (\ref{field}).
For fermions we write \cite{mapping}:
\begin{equation}\label{MCTDHF_ansatz}
\left|\Psi^{(A)}(t)\right> =
\sum_{\{\i\}} C_{\i}(t) \left|\i;t\right> \equiv
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right>,
\end{equation}
where the address $J_A$ is defined as follows:
\begin{equation}\label{I_numbering}
J_A \equiv J_A(\i)= 1 + \sum_{j=1}^{M_A-N_A}\binom{M_A-i_j}{M_A-N_A+1-j},
\end{equation}
whereas for bosons we write \cite{mapping}:
\begin{equation}\label{MCTDHB_ansatz}
\left|\Psi^{(A)}(t)\right> =
\sum_{\{\n\}} C_{\n}(t) \left|\n;t\right> \equiv
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right>,
\end{equation}
where the address $J_A$ is defined as follows:
\begin{equation}\label{J_numbering}
J_A \equiv J_A(\n) = 1 + \sum_{k=1}^{M_A-1} \binom{N_A+M_A-1-k-\sum_{l=1}^{k}n_l}{M_A-k}.
\end{equation}
The notation used in (\ref{MCTDHF_ansatz}-\ref{J_numbering})
follows the
Combinadic-based addressing
scheme of configurations introduced in \cite{mapping}.
For fermions we enumerate configurations by holes, ${\bf i}= (i_1,i_2,\ldots,i_j=q,\ldots,i_{M_A-N_A})$ and
$\i^{kq} = (i_1,i_2,\ldots,i_l=k,\ldots,i_{M_A-N_A})$,
whereas for bosons we enumerate configurations by particles,
${\bf n}= (n_1,\ldots,n_k,\ldots,n_q,\ldots,n_{M_A})$ and
$\n^{kq} = (n_1,\ldots,n_k-1,\ldots,n_q+1,\ldots,n_{M_A})$.
The index $J_A$ is termed ``address''
because it is an integer uniquely identifying a configuration which is described
by the positions of the holes $\i$ (for fermions) or the occupation numbers $\n$ (for bosons).
For more details of the Combinadic-based mapping and particularly
the connection between the bosonic occupation numbers
and the positions of the fermionic holes see \cite{mapping}.
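The two addressing formulas transcribe directly into code. The following is a minimal standalone sketch (our own illustration; `math.comb` supplies the binomials, and configurations outside the allowed range are not validated):

```python
from math import comb

def address_bosons(n, N, M):
    """Address J_A of a bosonic configuration n = (n_1, ..., n_M), sum(n) = N,
    following the Combinadic formula:
    J = 1 + sum_{k=1}^{M-1} C(N + M - 1 - k - sum_{l<=k} n_l, M - k)."""
    J, running = 1, 0
    for k in range(1, M):
        running += n[k - 1]
        J += comb(N + M - 1 - k - running, M - k)
    return J

def address_fermions(holes, N, M):
    """Address J_A of a fermionic configuration given the (increasing) hole
    positions i = (i_1, ..., i_{M-N}), following:
    J = 1 + sum_{j=1}^{M-N} C(M - i_j, M - N + 1 - j)."""
    J = 1
    for j, i_j in enumerate(holes, start=1):
        J += comb(M - i_j, M - N + 1 - j)
    return J
```

Both maps are bijections onto $1,\ldots,N^{(A)}_{\mathit{conf}}$; for two bosons in three orbitals, for instance, the six configurations $(2,0,0),\ldots,(0,0,2)$ receive the addresses $1,\ldots,6$, with $|N,0,\ldots\rangle$ mapped to $J_A=1$.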
For our requirements,
we will need the result of the operation of
the basic building bricks
onto the state vector,
namely, the operation of the one-body density
operators $\left\{\hat \rho^{(A)}_{kq}\right\}$
onto $\left|\Psi^{(A)}(t)\right>$.
Thus we have:
\begin{equation}\label{O_den}
\hat \rho^{(A)}_{kq} \left|\Psi^{(A)}(t)\right> = \hat \rho^{(A)}_{kq} \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1}
C_{J_A}(t) \left|J_A;t\right> \equiv
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \left|J_A;t\right>.
\end{equation}
For fermions we have the following relations \cite{mapping}:
\begin{eqnarray}\label{basic_mapping_F}
\!\!\!\!\!\!\!\! C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \equiv C^{\hat \rho^{(A)}_{kq}}_{J_A(\i)}(t) &=&
\left \{
\begin{matrix}
C_{J_A(\i^{kq})}(t) \times (-1)^{d(\i^{kq})}; & \ k \ne q, \ k \in \i^{kq}, \ q \not\in \i^{kq}\\
C_{J_A(\i)}(t); & \ k = q, \ k \not\in \i\\
0; & \ {\mathrm{otherwise}} \\
\end{matrix}
\right.,
\end{eqnarray}
where the distance between the $i_j$-th hole of $\i$ at orbital $q$ and the $i_l$-th hole of $\i^{kq}$
at orbital $k$ is
given by $d(\i^{kq}) = |k-q| - |j-l| - 1$
[equivalently, $d(\i^{kq}) = \sum_{p \in (k,q)} n_p$ simply counts how many fermions there are between
the $k$-th and $q$-th orbitals].
For bosons we have the following relations \cite{mapping}:
\begin{eqnarray}\label{basic_mapping_B}
C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \equiv C^{\hat \rho^{(A)}_{kq}}_{J_A(\n)}(t) &=&
\left \{
\begin{matrix}
C_{J_A(\n^{kq})}(t) \times \sqrt{n_k} \sqrt{n_q +1}; & \ k \ne q \\
C_{J_A(\n)}(t) \times n_k; & \ k = q \\
\end{matrix}
\right.,
\end{eqnarray}
which concludes our exposition of the Combinadic-based mapping
and assembly of
the operations of the basic building bricks $\left\{\hat \rho^{(A)}_{kq}\right\}$
on the many-body wave-function $\left|\Psi^{(A)}(t)\right>$.
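The bosonic rule can be transcribed one-to-one into code. A standalone sketch follows (our own illustration; for brevity the configurations are enumerated with `itertools` and the enumeration order itself serves as the address instead of the Combinadic one — any fixed one-to-one indexing does for the purpose of the demonstration):

```python
import numpy as np
from itertools import combinations_with_replacement

def boson_configs(N, M):
    """All occupation tuples (n_1, ..., n_M) with sum = N."""
    configs = []
    for occ in combinations_with_replacement(range(M), N):
        n = [0] * M
        for o in occ:
            n[o] += 1
        configs.append(tuple(n))
    return configs

def apply_rho_1b(C, k, q, configs, index):
    """Coefficient vector of rho_kq |Psi> = a_k^dag a_q |Psi> (bosons):
    C'_{J(n)} = sqrt(n_k) sqrt(n_q + 1) C_{J(n^{kq})}  for k != q,
    C'_{J(n)} = n_k C_{J(n)}                           for k == q."""
    C_out = np.zeros_like(C)
    for J, n in enumerate(configs):
        if k == q:
            C_out[J] = n[k] * C[J]
        elif n[k] > 0:
            n_kq = list(n)
            n_kq[k] -= 1
            n_kq[q] += 1
            C_out[J] = np.sqrt(n[k]) * np.sqrt(n[q] + 1) * C[index[tuple(n_kq)]]
    return C_out
```

As a quick consistency check, $\sum_k \big<\hat\rho^{(A)}_{kk}\big> = N_A$ holds for any normalized coefficient vector.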
From Eqs.~(\ref{density_oper_1B},\ref{density_oper_2B_3B}) we see how
to use the one-body (basic) building bricks $\left\{\hat \rho^{(A)}_{kq}\right\}$
to assemble higher-body operators.
In particular we find:
\begin{eqnarray}\label{basic_mapping_2B_3B}
& & C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) = \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)
\mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t), \nonumber \\
& & C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t) = \pm \delta_{pr} C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)
- \delta_{pl} C^{\hat \rho^{(A)}_{ksrq}}_{J_A}(t) +
{C^{\hat \rho^{(A)}_{pq}}_{J_A}}^{\hat \rho^{(A)}_{ksrl}}\!(t).
\end{eqnarray}
The meaning of the two levels of density operators in the
superscripts of the coefficients
$C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)$
and
$C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t)$,
resulting from higher-body operators
in (\ref{basic_mapping_2B_3B}),
is that the lower-level density operator is multiplied on the many-body wave-function first,
and the upper-level
density operator is multiplied thereafter on the result.
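The nested-superscript prescription mirrors directly into code. In the following standalone sketch (our own illustration, bosonic lower signs; coefficients are held in a dictionary keyed by occupation tuples, which sidesteps explicit addressing) the two-body coefficients are obtained purely by repeated one-body applications:

```python
import numpy as np

def rho_1b(k, q, C):
    """Act with rho_kq = a_k^dag a_q (bosons) on coefficients stored as
    C = {occupation-tuple: amplitude}."""
    out = {}
    for n, c in C.items():
        if k == q:
            if n[k]:
                out[n] = out.get(n, 0.0) + n[k] * c
        elif n[q]:
            m = list(n)
            m[q] -= 1
            m[k] += 1
            m = tuple(m)
            out[m] = out.get(m, 0.0) + np.sqrt(n[q]) * np.sqrt(n[k] + 1) * c
    return out

def rho_2b(k, s, l, q, C):
    """Two-body coefficients via the recursion (lower, bosonic signs):
    C^{rho_kslq} = -delta_{sl} C^{rho_kq} + rho_kl applied to (rho_sq applied to C).
    The lower-level operator rho_sq acts on the wave-function first."""
    out = rho_1b(k, l, rho_1b(s, q, C))
    if s == l:
        for n, c in rho_1b(k, q, C).items():
            out[n] = out.get(n, 0.0) - c
    return out
```

For a single permanent, e.g. two bosons in orbital 1 and one in orbital 2, the diagonal element $\hat\rho_{1111}$ correctly evaluates to $n_1(n_1-1)=2$ (the code uses 0-based orbital indices).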
The key ingredient in the utilization of the Lagrangian formulation \cite{MCTDHB1,LF1,LF2}
of the (Dirac-Frenkel \cite{DF1,DF2})
time-dependent variational principle to derive the equations-of-motion is
the evaluation of matrix elements with respect to the
multiconfigurational wave-function $\left|\Psi^{(A)}(t)\right>$.
This will be utilized in the next subsection \ref{SEC2.2}.
For the moment,
we would like to prescribe how
such matrix elements with respect to $\left|\Psi^{(A)}(t)\right>$
are to be evaluated.
| 3,906 | 51,553 |
en
|
train
|
0.176.3
|
For our requirements,
we will need the result of the operation of
the basic building bricks
onto the state vector,
namely, the operation of the one-body density
operators $\left\{\hat \rho^{(A)}_{kq}\right\}$
onto $\left|\Psi^{(A)}(t)\right>$.
Thus we have:
\begin{equation}\label{O_den}
\hat \rho^{(A)}_{kq} \left|\Psi^{(A)}(t)\right> = \hat \rho^{(A)}_{kq} \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1}
C_{J_A}(t) \left|J_A;t\right> \equiv
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \left|J_A;t\right>.
\end{equation}
For fermions we have the following relations \cite{mapping}:
\begin{equation}n\label{basic_mapping_F}
\!\!\!\!\!\!\!\! C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \equiv C^{\hat \rho^{(A)}_{kq}}_{J_A(\i)}(t) &=&
\left \{
\begin{matrix}
C_{J_A(\i^{kq})}(t) \times (-1)^{d(\i^{kq})}; & \ k \ne q, \ k \in \i^{kq}, \ q \not\in \i^{kq}\\
C_{J_A(\i)}(t); & \ k = q, \ k \not\in \i\\
0; & \ {\mathrm{otherwise}} \\
\end{matrix}
\right., \
\end{equation}n
where the distance between the $i_j$-th hole of $\i$ at orbital $q$ and the $i_l$-th hole of $\i^{kq}$
at orbital $k$ is
given by $d(\i^{kq}) = |k-q| - |j-l| - 1$
[equivalently, $d(\i^{kq}) = \sum_{p \in (k,q)} n_p$ simply enumerates how many fermions are there between
the $k$-th and $q$-th orbitals].
For bosons we have the following relations \cite{mapping}:
\begin{equation}n\label{basic_mapping_B}
C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \equiv C^{\hat \rho^{(A)}_{kq}}_{J_A(\n)}(t) &=&
\left \{
\begin{matrix}
C_{J_A(\n^{kq})}(t) \times \sqrt{n_k} \sqrt{n_q +1}; & \ k \ne q \\
C_{J_A(\n)}(t) \times n_k; & \ k = q \\
\end{matrix}
\right., \
\end{equation}n
which concludes our exposition of the Combinadic-based mapping
and assembly of
the operations of the basic building bricks $\left\{\hat \rho^{(A)}_{kq}\right\}$
on the many-body wave-function $\left|\Psi^{(A)}(t)\right>$.
From Eqs.~(\ref{density_oper_1B},\ref{density_oper_2B_3B}) we see how
to use the one-body (basic) building bricks $\left\{\hat \rho^{(A)}_{kq}\right\}$
to assemble higher-body operators.
In particular we find:
\begin{equation}n\label{basic_mapping_2B_3B}
& & C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) = \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)
\mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t), \nonumber \\
& & C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t) = \pm \delta_{pr} C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)
- \delta_{pl} C^{\hat \rho^{(A)}_{ksrq}}_{J_A}(t) +
{C^{\hat \rho^{(A)}_{pq}}_{J_A}}^{\hat \rho^{(A)}_{ksrl}}\!(t). \
\end{equation}n
The meaning of the two levels of density operators in the
superscripts of the coefficients
$C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)$
and
$C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t)$,
resulting from higher-body operators
in (\ref{basic_mapping_2B_3B}),
is that the lower-level density operator is multiplied on the many-body wave-function first,
and the upper-level
density operator is multiplied thereafter on the result.
The key ingredient in the utilization of the Lagrangian formulation \cite{MCTDHB1,LF1,LF2}
of the (Dirac-Frenkel \cite{DF1,DF2})
time-dependent variational principle to derive the equations-of-motion is
the evaluation of matrix elements with respect to the
multiconfigurational wave-function $\left|\Psi^{(A)}(t)\right>$.
This will be utilized in the next subsection \ref{SEC2.2}.
For the moment,
we would like to prescribe how
such matrix elements with respect to $\left|\Psi^{(A)}(t)\right>$
are to be evaluated.
Consider the operator $\hat O^{(A)}$,
which can be a one-body operator, two-body operator, three-body operator, etc.
Then, we express and compute the expectation value of $\hat O^{(A)}$
with respect to $\left|\Psi^{(A)}(t)\right>$
as follows \cite{mapping}:
\begin{equation}\label{expectation}
\left<\Psi^{(A)}(t)\left| \hat O^{(A)} \right|\Psi^{(A)}(t)\right> =
\left<\Psi^{(A)}(t)\left| \left\{ \hat O^{(A)} \right|\Psi^{(A)}(t)\right> \right\} =
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat O^{(A)}}_{J_A}(t),
\end{equation}
where
\begin{equation}\label{O_Psi}
\hat O^{(A)} \left|\Psi^{(A)}(t)\right> =
\hat O^{(A)} \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right> \equiv
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^{\hat O^{(A)}}_{J_A}(t) \left|J_A;t\right>.
\end{equation}
In particular,
for a one-body operator,
$\hat O^{(A)} = \sum_{k,q} O^{(A)}_{kq} \hat \rho^{(A)}_{kq}$, we get:
\begin{equation}\label{C_one}
C^{\hat O^{(A)}}_{J_A}(t) = \sum_{k,q}^{M_A} O^{(A)}_{kq} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t),
\end{equation}
for a two-body operator, $\hat O^{(A)} = \frac{1}{2} \sum_{k,s,q,l} O^{(A)}_{ksql} \hat \rho^{(A)}_{kslq}$,
we get from (\ref{basic_mapping_2B_3B}):
\begin{eqnarray}\label{C_two}
C^{\hat O^{(A)}}_{J_A}(t) &=& \frac{1}{2}
\sum_{k,s,q,l}^{M_A} O^{(A)}_{ksql} C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) = \nonumber \\
&=& \frac{1}{2} \sum_{k,s,q,l}^{M_A} O^{(A)}_{ksql} \left[ \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)
\mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t) \right],
\end{eqnarray}
and for a
three-body operator, $\hat O^{(A)} = \frac{1}{6}
\sum_{k,s,p,r,l,q} O^{(A)}_{kspqlr} \hat \rho^{(A)}_{ksprlq}$,
we get from (\ref{basic_mapping_2B_3B}):
\begin{eqnarray}\label{C_three}
& &
C^{\hat O^{(A)}}_{J_A}(t) = \frac{1}{6} \sum_{k,s,p,r,l,q}^{M_A}
O^{(A)}_{kspqlr} C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t) = \\
&=& \frac{1}{6} \sum_{k,s,p,r,l,q}^{M_A} O^{(A)}_{kspqlr} \left[ \pm \delta_{pr} C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)
- \delta_{pl} C^{\hat \rho^{(A)}_{ksrq}}_{J_A}(t) +
{C^{\hat \rho^{(A)}_{pq}}_{J_A}}^{\hat \rho^{(A)}_{ksrl}}\!(t)
\right]. \nonumber
\end{eqnarray}
Finally and generally,
the result of a sum of (operations of) operators, e.g., $\hat O_1^{(A)} + \hat O_2^{(A)}$,
on $\left|\Psi^{(A)}(t)\right>$
translates to the sum of the respective coefficients \cite{mapping}:
\begin{equation}\label{operators_sum}
C^{\hat O_1^{(A)} + \hat O_2^{(A)}}_{J_A}(t) = C^{\hat O_1^{(A)}}_{J_A}(t) + C^{\hat O_2^{(A)}}_{J_A}(t).
\end{equation}
These compact relations resting on one-body density operators only
[the two-body density operators in (\ref{C_three})
are assembled from one-body density
operators according to (\ref{density_oper_1B},\ref{density_oper_2B_3B})]
will be used to reformulate MCTDHF and MCTDHB
in a recursive manner in the following subsection \ref{SEC2.2}.
\subsection{Equations-of-motion utilizing one-body density operators
and Combinadic-based mapping}\label{SEC2.2}
We can derive (reconstruct)
the MCTDHF and MCTDHB equations-of-motion,
taking into account {\it a priori}
that matrix elements of the form of (\ref{expectation}) enter the variational formulation.
Within the Lagrangian formulation \cite{MCTDHB1,LF1,LF2} of the (Dirac-Frenkel \cite{DF1,DF2})
time-dependent variational principle,
the action functional of the time-dependent
many-particle Schr\"odinger equation takes
on the following form:
\begin{eqnarray}\label{func_basic}
& & S\left[\left\{C_{J_A}(t)\right\},\left\{\phi_k(\x,t)\right\}\right] =
\int dt \Bigg\{\left< \Psi^{(A)}(t) \left| \hat H^{(A)} - i\frac{\partial}{\partial t}\right| \Psi^{(A)}(t)\right>
- \nonumber \\
& & \qquad - \sum_{k,j}^{M_A} \mu_{kj}^{(A)}(t) \left[\left<\phi_k \left|\right.\phi_j\right> - \delta_{kj}\right]
- \varepsilon^{(A)}(t) \left[\sum_{J_A=1}^{N^{(A)}_{\mathit{conf}}} \left|C_{J_A}(t)\right|^2 - 1 \right]\Bigg\},
\end{eqnarray}
where the time-dependent Lagrange multipliers
$\left\{\mu_{kj}^{(A)}(t)\right\}$ are introduced
to guarantee the orthonormality
of the orbitals at all times.
Furthermore,
they enable one to first evaluate the expectation
value of $\hat H^{(A)} - i\frac{\partial}{\partial t}$
with respect to $\left|\Psi^{(A)}(t)\right>$
and then to perform the variation,
which is precisely what is needed in
order to exploit the Combinadic-based
mapping \cite{mapping} {\it a priori}
in the derivation of the equations-of-motion.
The (here redundant) time-dependent Lagrange multiplier $\varepsilon^{(A)}(t)$
ensures normalization of the expansion coefficients at all times,
and would resurface in the static theory
in the case of the
imaginary-time-propagation formulation.
To perform the variation of the action functional with
respect to the coefficients,
we express the expectation value
$\left< \Psi^{(A)}(t) \left| \hat H^{(A)} - i\frac{\partial}{\partial t}\right| \Psi^{(A)}(t)\right>$
following the Combinadic-based
mapping \cite{mapping}
and the compact expression in Eq.~(\ref{expectation}):
\begin{equation}\label{expectation_H_C}
\left<\Psi^{(A)}(t)\left| \hat H^{(A)} - i\frac{\partial}{\partial t} \right|\Psi^{(A)}(t)\right> =
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1}
C^\ast_{J_A}(t) \left[ C^{\hat H^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) - i \dot C_{J_A}(t) \right].
\end{equation}
Representation (\ref{expectation_H_C}) makes
it clear what the variation with respect to the
coefficients $\left\{C^\ast_{J_A}(t)\right\}$ would lead to.
When this
variation
is performed explicitly,
one immediately finds:
\begin{equation}\label{C_gen}
C^{\hat H^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) = i \dot C_{J_A}(t), \qquad \forall J_A.
\end{equation}
The meaning of $i\frac{\partial}{\partial t}^{(A)}$
is that the time-derivative is a one-body operator in the
$A$-species Fock (and orbital) space.
According to the rules of the previous subsection \ref{SEC2.1},
the left-hand-side of Eq.~(\ref{C_gen}) is given by the sum of its
one-, two- and three-body constituents:
\begin{equation}\label{SE_C_gen}
C^{\hat H^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) =
C^{\hat h^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) +
C^{\hat W^{(A)}}_{J_A}(t) + C^{\hat U^{(A)}}_{J_A}(t).
\end{equation}
The invariance of $\left|\Psi^{(A)}(t)\right>$
to unitary transformations of the orbitals,
compensated by the `reverse' transformations
of the expansion coefficients, is well-known \cite{cpl,jcp,MCTDHB1} and
can be represented as follows:
$\left|\Psi^{(A)}(t)\right> = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right> =$\break
$\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} \overline{C}_{J_A}(t) \overline{\left|J_A;t\right>}$,
with obvious notation.
This invariance can
be utilized to bring the equations-of-motion into a simpler form
(see, in particular, the discussion below on the orbitals' part).
Primarily, the differential conditions first introduced
by the MCTDH founders \cite{cpl,jcp}:
\begin{equation}\label{diff_con_A}
\left\{i\frac{\partial}{\partial t}^{(A)}\right\}_{kq} \equiv
i\left<\phi_k \left|\dot\phi_q\right>\right. = 0, \ \ k,q=1,\ldots,M_A,
\end{equation}
come out explicitly from such a unitary transformation \cite{MCTDHB1,conversion}
and straightforwardly lead,
for the equations-of-motion of the coefficients,
to:
\begin{eqnarray}\label{C_gen_phi_phidot}
& & C^{\hat H^{(A)}}_{J_A}(t) = i \dot C_{J_A}(t), \qquad \forall J_A, \nonumber \\
& & C^{\hat H^{(A)}}_{J_A}(t) =
C^{\hat h^{(A)}}_{J_A}(t) + C^{\hat W^{(A)}}_{J_A}(t) + C^{\hat U^{(A)}}_{J_A}(t).
\end{eqnarray}
For the general form of the differential conditions,
Eq.~(\ref{diff_con_A}),
see the literature \cite{review,book}.
We remark that a particularly interesting representation
(put forward and utilized so far for distinguishable degrees-of-freedom only)
of the differential conditions
can be made in order
to propagate the systems'
natural orbitals \cite{Uwe_nat1,Uwe_nat2}.
In MCTDHF and MCTDHB
the integration of the coefficients' part in time
is performed (for unitary time-evolution)
by the short iterative Lanczos (SIL) algorithm \cite{SIL}.
We remark on the numerical implementation of Eq.~(\ref{C_gen_phi_phidot})
within SIL propagation \cite{package}.
For the SIL one needs to operate with powers of $\hat H$ onto the many-particle wave-function
and construct the $K$-dimensional Krylov subspace:
$\left\{\left|\Psi^{(A)}(t)\right>, \hat H^{(A)}\left|\Psi^{(A)}(t)\right>,\ldots,
\hat{H}^{(A)}\strut^{K-1} \left|\Psi^{(A)}(t)\right> \right\}$.
In the language of the Combinadic-based mapping of coefficients
and utilizing the recipe of
how to operate with operators on the many-particle wave-function
discussed above \cite{mapping},
this construction translates to:
$\left\{C_{J_A}(t), C^{\hat H^{(A)}}_{J_A}(t),{C^{\hat H^{(A)}}_{J_A}}^{\hat H^{(A)}}\!(t),\ldots\right\}$.
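The SIL step itself only ever needs this matrix-free action of $\hat H^{(A)}$. The following standalone sketch is our own illustration (a dense random Hermitian matrix stands in for the mapped Hamiltonian action, and the Krylov dimension $K$ and step $dt$ are arbitrary choices):

```python
import numpy as np

def sil_step(apply_H, psi, dt, K):
    """One short-iterative-Lanczos step: build the K-dimensional Krylov space
    {psi, H psi, ..., H^(K-1) psi} by repeated matrix-free application of H,
    then propagate by exp(-i T dt) within it (T = Lanczos tridiagonal matrix)."""
    norm0 = np.linalg.norm(psi)
    V = np.zeros((K, psi.size), dtype=complex)   # Lanczos vectors
    alpha = np.zeros(K)                          # tridiagonal diagonal
    beta = np.zeros(K - 1)                       # tridiagonal off-diagonal
    V[0] = psi / norm0
    w = apply_H(V[0])
    alpha[0] = np.vdot(V[0], w).real
    w = w - alpha[0] * V[0]
    for j in range(1, K):
        beta[j - 1] = np.linalg.norm(w)
        V[j] = w / beta[j - 1]
        w = apply_H(V[j])
        alpha[j] = np.vdot(V[j], w).real
        w = w - alpha[j] * V[j] - beta[j - 1] * V[j - 1]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, U = np.linalg.eigh(T)
    # psi = norm0 * e_1 in the Krylov basis; rotate, apply phases, rotate back
    c = norm0 * (U @ (np.exp(-1j * evals * dt) * U[0, :].conj()))
    return c @ V
```

For a Hermitian test matrix the step reproduces the exact propagator to high accuracy once $K$ comfortably exceeds $\|\hat H\|\,dt$, while preserving the norm exactly within the subspace.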
Let us now move to the equations-of-motion for the orbitals
$\left\{\phi_k(\x,t)\right\}$.
For this,
the expectation value of the many-body Hamiltonian $\hat H^{(A)}$
with respect to $\left|\Psi^{(A)}(t)\right>$ has to be expressed
in a form which allows for variation with respect to the orbitals,
namely as an
explicit function of the quantities (integrals) $h^{(A)}_{kq}$,
$W^{(A)}_{ksql}$ and $U^{(A)}_{kspqlr}$ in (\ref{matrix_elements}).
The result reads:
\begin{eqnarray}\label{expectation_H3_phi}
\!\!\!\!\!\!\!\! & &
\left<\Psi^{(A)}\left|\hat H^{(A)} - i\frac{\partial}{\partial t} \right|\Psi^{(A)}\right> =
\sum_{k,q=1}^{M_A} \rho^{(A)}_{kq} \left[ h^{(A)}_{kq} -
\left\{i\frac{\partial}{\partial t}^{(A)}\right\}_{kq} \right] + \\
& &
+ \frac{1}{2}\sum_{k,s,l,q=1}^{M_A} \rho^{(A)}_{kslq} W^{(A)}_{ksql}
+ \frac{1}{6}\sum_{k,s,p,r,l,q=1}^{M_A} \rho^{(A)}_{ksprlq} U^{(A)}_{kspqlr}
- \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1}
i C^\ast_{J_A}(t) \dot C_{J_A}(t). \nonumber
\end{eqnarray}
The expectation values of the density operators $\hat \rho^{(A)}_{kq}$,
$\hat \rho^{(A)}_{kslq}$ and $\hat \rho^{(A)}_{ksprlq}$
with respect to $\left|\Psi^{(A)}(t)\right>$
(resulting from the expectation value of the Hamiltonian
with respect to many-particle wave-function) are computed
following Eq.~(\ref{expectation}):
\begin{eqnarray}\label{denisty_matrx_element}
& & \rho^{(A)}_{kq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{kq}}_{J_A}(t),
\qquad \rho^{(A)}_{kslq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t),
\nonumber \\
& & \qquad \qquad
\rho^{(A)}_{ksprlq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t),
\end{eqnarray}
where the
coefficients
$C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)$, $C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)$
and $C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t)$
are given in Eqs.~(\ref{basic_mapping_F},\ref{basic_mapping_B})
and (\ref{basic_mapping_2B_3B}),
respectively.
We collect the expectation values
of the one-body density operators
as the matrix $\brho^{(A)}(t)=\left\{\rho^{(A)}_{kq}(t)\right\}$.
One should remember that
the expectation values
of two- and three-body
density operators can generally not be factorized
into products of expectation values of one-body density operators.
For instance (and in the language of the Combinadic-based mapping
of coefficients),
$C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) = \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)
\mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t)
\ne
\pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)
\mp C^{\hat \rho^{(A)}_{kl}}_{J_A}(t) C^{\hat \rho^{(A)}_{sq}}_{J_A}(t)$.
This is unlike the operation of the density operators themselves
on the many-particle wave-function utilized above.
| 3,233 | 51,553 |
en
|
train
|
0.176.5
|
In MCTDHF and MCTDHB
the integration of the coefficients' part in time
is performed (for unitary time-evolution)
by the short iterative Lanczos (SIL) algorithm \cite{SIL}.
We remark on the numerical implementation of Eq.~(\ref{C_gen_phi_phidot})
within SIL propagation \cite{package}.
For the SIL one needs to operate with powers of $\hat H$ onto the many-particle wave-function
and construct the $K$-dimensional Krylov subspace:
$\left\{\left|\Psi^{(A)}(t)\right>, \hat H^{(A)}\left|\Psi^{(A)}(t)\right>,\ldots,
\hat{H}^{(A)}\strut^{K-1} \left|\Psi^{(A)}(t)\right> \right\}$.
In the language of the Combinadic-based mapping of coefficients
and utilizing the recipe of
how to operate with operators on the many-particle wave-function
discussed above \cite{mapping},
this construction translates to:
$\left\{C_{J_A}(t), C^{\hat H^{(A)}}_{J_A}(t),{C^{\hat H^{(A)}}_{J_A}}^{\hat H^{(A)}}\!(t),\ldots\right\}$.
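A minimal Python sketch of this Krylov-space construction may help readers implementing the SIL step; the function name `krylov_basis` and the callable `apply_H` (which stands in for the Combinadic-based mapped action of $\hat H^{(A)}$ on the coefficient vector) are our own illustrations and not part of the actual package:

```python
import numpy as np

def krylov_basis(apply_H, c0, K):
    """Orthonormal basis of the K-dimensional Krylov space
    span{c0, H c0, ..., H^(K-1) c0}, built by Gram-Schmidt."""
    V = [c0 / np.linalg.norm(c0)]
    for _ in range(K - 1):
        w = apply_H(V[-1])
        for v in V:                       # project out previous basis vectors
            w = w - np.vdot(v, w) * v
        V.append(w / np.linalg.norm(w))
    return np.array(V)
```

In practice each `apply_H` call is performed directly on the mapped coefficients, never through an explicit Hamiltonian matrix.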
Let us now move to the equations-of-motion for the orbitals
$\left\{\phi_k(\x,t)\right\}$.
For this,
the expectation value of the many-body Hamiltonian $\hat H^{(A)}$
with respect to $\left|\Psi^{(A)}(t)\right>$ has to be expressed
in a form which allows for variation with respect to the orbitals,
namely as an
explicit function of the quantities (integrals) $h^{(A)}_{kq}$,
$W^{(A)}_{ksql}$ and $U^{(A)}_{kspqlr}$ in (\ref{matrix_elements}).
The result reads:
\begin{eqnarray}\label{expectation_H3_phi}
\!\!\!\!\!\!\!\! & &
\left<\Psi\left|\hat H^{(A)} - i\frac{\partial}{\partial t} \right|\Psi\right> =
\sum_{k,q=1}^{M_A} \rho^{(A)}_{kq} \left[ h^{(A)}_{kq} -
\left\{i\frac{\partial}{\partial t}^{(A)}\right\}_{kq} \right] + \\
& &
+ \frac{1}{2}\sum_{k,s,l,q=1}^{M_A} \rho^{(A)}_{kslq} W^{(A)}_{ksql}
+ \frac{1}{6}\sum_{k,s,p,r,l,q=1}^{M_A} \rho^{(A)}_{ksprlq} U^{(A)}_{kspqlr}
- \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1}
i C^\ast_{J_A}(t) \dot C_{J_A}(t). \nonumber
\end{eqnarray}
The expectation values of the density operators $\hat \rho^{(A)}_{kq}$,
$\hat \rho^{(A)}_{kslq}$ and $\hat \rho^{(A)}_{ksprlq}$
with respect to $\left|\Psi^{(A)}(t)\right>$
(resulting from the expectation value of the Hamiltonian
with respect to many-particle wave-function) are computed
following Eq.~(\ref{expectation}):
\begin{eqnarray}\label{denisty_matrx_element}
& & \rho^{(A)}_{kq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{kq}}_{J_A}(t),
\qquad \rho^{(A)}_{kslq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t),
\nonumber \\
& & \qquad \qquad
\rho^{(A)}_{ksprlq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t),
\end{eqnarray}
where the
coefficients
$C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)$, $C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)$
and $C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t)$
are given in Eqs.~(\ref{basic_mapping_F},\ref{basic_mapping_B})
and (\ref{basic_mapping_2B_3B}),
respectively.
We collect the expectation values
of the one-body density operators
as the matrix $\brho^{(A)}(t)=\left\{\rho^{(A)}_{kq}(t)\right\}$.
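The assembly of $\brho^{(A)}(t)$ from the mapped coefficients can be sketched as follows; the callable `mapped(k, q)` stands in for the Combinadic-based vectors $C^{\hat\rho^{(A)}_{kq}}_{J_A}(t)$, and the one-particle toy mapping is our own simplification for illustration only:

```python
import numpy as np

def one_body_density_matrix(C, mapped, M):
    """rho_kq = sum_J C*_J(t) C^{rho_kq}_J(t); `mapped(k, q)` returns
    the coefficient vector of (a_k^dag a_q)|Psi>."""
    rho = np.zeros((M, M), dtype=complex)
    for k in range(M):
        for q in range(M):
            rho[k, q] = np.vdot(C, mapped(k, q))  # vdot conjugates C
    return rho

# Toy illustration (N = 1 particle): configurations coincide with the
# orbitals, so (a_k^dag a_q)|Psi> has coefficients delta_{Jk} C_q.
def mapped_one_particle(C):
    def mapped(k, q):
        out = np.zeros_like(C)
        out[k] = C[q]
        return out
    return mapped
```

For $N=1$ the resulting matrix is simply $\rho_{kq}=C^\ast_k C_q$, which the toy mapping reproduces.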
One should remember that
the expectation values
of two- and three-body
density operators can generally not be factorized
into products of expectation values of one-body density operators.
For instance (and in the language of the Combinadic-based mapping
of coefficients),
$C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) = \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)
\mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t)
\ne
\pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)
\mp C^{\hat \rho^{(A)}_{kl}}_{J_A}(t) C^{\hat \rho^{(A)}_{sq}}_{J_A}(t)$.
This is unlike the operation of the density operators themselves
on the many-particle wave-function utilized above.
We can now perform
the variation of
$S\left[\left\{C_{J_A}(t)\right\},\left\{\phi_k(\x,t)\right\}\right]$
with respect to the orbitals.
This variation has been detailed
in the literature, see \cite{MCTDHB1,unified}, and we give here
the main steps in the derivation of the equations-of-motion
to the extent they are needed later on.
Making use of the orthonormality relation between the
time-dependent orbitals $\left\{\phi_k(\x,t)\right\}$,
we can solve for the Lagrange multipliers,
$k,j = 1,\ldots,M_A$:
\begin{eqnarray}\label{MCTDHX_H3_mu}
& & \!\!\!\!\!\!\!\! \mu_{kj}^{(A)}(t) = \\
& & \!\!\!\!\!\!\!\! =
\left<\phi_j\left|
\sum^{M_A}_{q=1} \left( \rho^{(A)}_{kq} \left[ \hat h^{(A)}
- i\frac{\partial}{\partial t}^{(A)} \right] + \sum^{M_A}_{s,l=1}\rho^{(A)}_{kslq} \hat W^{(A)}_{sl}
+ \frac{1}{2}\sum_{s,p,r,l=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr} \right) \right|\phi_q\right>. \nonumber
\end{eqnarray}
The Lagrange multipliers $\left\{\mu_{kj}^{(A)}(t)\right\}$ can
be eliminated from the equations-of-motion;
this is achieved by introducing
the projection operator:
\begin{equation}\label{project_A}
\hat {\mathbf P}^{(A)} = 1 - \sum_{u=1}^{M_A} \left|\phi_{u}\right>\left<\phi_{u}\right|.
\end{equation}
When this is done,
we find the following equations-of-motion for the orbitals $\left\{\phi_j(\x,t)\right\}$,
$j=1,\ldots,M_A$:
\begin{eqnarray}\label{MCTDHX_P_P_H3_eom}
& & \hat {\mathbf P}^{(A)} i\left|\dot\phi_j\right> = \hat {\mathbf P}^{(A)}
\Bigg[\hat h^{(A)} \left|\phi_j\right> + \\
& & + \sum^{M_A}_{k,q=1}
\left\{\brho^{(A)}(t)\right\}^{-1}_{jk}
\sum^{M_A}_{s,l=1}
\left(\rho^{(A)}_{kslq} \hat{W}^{(A)}_{sl}
+\frac{1}{2}\sum_{p,r=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr} \right)
\left|\phi_q\right> \Bigg], \nonumber
\end{eqnarray}
where
\begin{eqnarray}\label{TD_1B_2_3_potentials}
& & \hat W^{(A)}_{sl}(\x,t)=\int\phi_s^\ast(\x',t) \hat W^{(A)}(\x,\x') \phi_l(\x',t) d\x', \\
& & \hat U^{(A)}_{splr}(\x,t) = \int \!\! \int \phi_s^\ast(\x',t)
\phi_p^\ast(\x'',t) \hat U^{(A)}(\x,\x',\x'') \phi_l(\x',t) \phi_r(\x'',t) d\x' d\x'', \nonumber
\end{eqnarray}
are local (for spin-independent interactions),
time-dependent one-body potentials,
and $\dot \phi_j \equiv \frac{\partial\phi_j}{\partial t}$.
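On a grid, the first of these integrals reduces to a simple quadrature; the sketch below uses a rectangle rule in one dimension, and the grid setup, the names, and the discrete-delta test for a contact interaction are our own assumptions:

```python
import numpy as np

def local_W_sl(phi_s, phi_l, W, dx):
    """W_sl(x, t) = int phi_s*(x', t) W(x, x') phi_l(x', t) dx'
    by rectangle-rule quadrature; W[i, j] = W(x_i, x'_j)."""
    return W @ (phi_s.conj() * phi_l) * dx
```

For a contact interaction $W(x,x')=g\,\delta(x-x')$ the discrete kernel is $g\,\mathbb{1}/dx$ and the potential collapses to $g\,\phi_s^\ast(x)\phi_l(x)$, a useful consistency check.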
Utilizing the differential conditions (\ref{diff_con_A})
we can eliminate the projection operator $\hat {\mathbf P}^{(A)}$
appearing on the left-hand-side of
Eq.~(\ref{MCTDHX_P_P_H3_eom}) and arrive at
the final result for the equations-of-motion of
the orbitals in MCTDHF and MCTDHB (see \cite{book,unified}),
$j=1,\ldots,M_A$:
\begin{eqnarray}\label{MCTDHX_P_H3_eom}
& & i\left|\dot\phi_j\right> = \hat {\mathbf P}^{(A)}
\Bigg[\hat h^{(A)} \left|\phi_j\right> + \\
& & + \sum^{M_A}_{k,q=1}
\left\{\brho^{(A)}(t)\right\}^{-1}_{jk}
\sum^{M_A}_{s,l=1}
\left(\rho^{(A)}_{kslq} \hat{W}^{(A)}_{sl}
+\frac{1}{2}\sum_{p,r=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr} \right)
\left|\phi_q\right> \Bigg]. \nonumber
\end{eqnarray}
Summarizing,
the coupled sets of equations-of-motion (\ref{C_gen_phi_phidot})
for the expansion coefficients and (\ref{MCTDHX_P_H3_eom})
for the orbitals
constitute
the MCTDHF and MCTDHB methods,
where the one-body density operators
(\ref{density_oper_1B},\ref{density_oper_2B_3B})
are employed as the basic building bricks
in their construction and implementation.
We can also
propagate the MCTDHF and MCTDHB
equations-of-motion (\ref{C_gen_phi_phidot},\ref{MCTDHX_P_H3_eom})
in imaginary time and
arrive for time-independent Hamiltonians
at the corresponding self-consistent
static theories, MCHF \cite{gen_MCHF1,gen_MCHF2} and MCHB \cite{MCHB}.
Thus,
substituting $t \to -it$ in the coupled sets (\ref{C_gen},\ref{MCTDHX_P_P_H3_eom})
or in (\ref{C_gen_phi_phidot},\ref{MCTDHX_P_H3_eom}),
and translating back from the projection operator $\hat {\mathbf P}^{(A)}$
to the Lagrange multipliers $\left\{\mu_{kj}^{(A)}\right\}$,
the final result reads, $k=1,\ldots,M_A$:
\begin{eqnarray}\label{MCTDH_H3_stationary}
& & \!\!\!\!\!\!\!\!
\sum_{q=1}^{M_A} \left[ \rho^{(A)}_{kq} \hat h^{(A)} +
\sum^{M_A}_{s,l=1} \left(\rho^{(A)}_{kslq} \hat{W}^{(A)}_{sl}
+ \frac{1}{2}\sum_{p,r=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr}\right)
\right] \left|\phi_q\right> =
\sum_{j=1}^{M_A} \mu_{kj}^{(A)} \left|\phi_j\right>, \nonumber \\
& &
\qquad \qquad
C^{\hat H^{(A)}}_{J_A} = \varepsilon^{(A)} C_{J_A}, \qquad \forall J_A,
\end{eqnarray}
where, making use of the normalization of the many-particle wave-function,
$\varepsilon^{(A)}= \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A} C^{\hat H^{(A)}}_{J_A}$
is the eigen-energy of the system.
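The imaginary-time relaxation of the coefficients' part can be illustrated by a deliberately simple sketch: a plain explicit-Euler step with renormalization after each step. Actual implementations use more sophisticated integrators, and all names here are our own:

```python
import numpy as np

def relax(apply_H, C, dt, steps):
    """Explicit-Euler imaginary-time propagation of the coefficient
    vector; renormalizing each step drives C, for small enough dt,
    toward the lowest eigenvector of H."""
    for _ in range(steps):
        C = C - dt * apply_H(C)
        C = C / np.linalg.norm(C)
    return C
```

Each step damps excited components by factors $(1-\varepsilon_n\,dt)$ relative to the ground state, so the energy expectation value converges from above to the lowest eigenvalue.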
Making use of the fact that the matrix of Lagrange multipliers
$\{\mu_{kj}^{(A)}\}$ is Hermitian (for stationary states)
and of
the invariance property of the multiconfigurational wave-function
(to unitary transformations
of the orbitals
compensated by the `reverse'
transformations
of the coefficients),
one can transform Eq.~(\ref{MCTDH_H3_stationary})
to a representation where $\{\mu_{kj}^{(A)}\}$
is a diagonal matrix.
All in all,
we have formulated in the present section
the MCTDHF and MCTDHB equations-of-motion,
as well as their static
variants MCHF and MCHB,
by (i) utilizing in a recursive manner one-body density operators only,
and by (ii) employing {\it a priori} the Combinadic-based mapping
formulation of Ref.~\cite{mapping} to evaluate matrix elements.
This sets up the tools to
put forward the MCTDH theory
for mixtures of three kinds of identical particles in the following Sec.~\ref{SEC3},
and to briefly
discuss its structure and properties,
and how to implement it.
\section{Three kinds of identical particles: MCTDH-FFF, MCTDH-BFF, MCTDH-BBF and MCTDH-BBB}\label{SEC3}
In the present section we specify the MCTDH theory for mixtures of three
kinds of identical particles, interacting with up to three-body forces.
Before we get into the details of derivation and flood of equations,
we would like to lay out a general scheme or flowchart that one can follow to handle
similar or even more complex mixtures.
Specifically, we need to assign a different set of time-dependent orthonormal orbitals to each and every species in the mixture.
These orbitals are then used to assemble the time-dependent configurations
(with determinants' parts for fermions and permanents' parts for bosons).
The many-particle wave-function is thereafter assembled as a linear combination of
all time-dependent configurations with time-dependent expansion coefficients.
The many-particle Hamiltonian contains different terms:
intra-species terms and inter-species terms,
which consist of two-body, three-body and higher-order interactions.
The main point in the representation of the Hamiltonian is the utilization of one-body density operators.
In turn, all intra-species and inter-species interactions can be represented
utilizing (products of) one-body density operators only.
The key step in the derivation of the equations-of-motion is the utilization of the
Lagrangian formulation \cite{MCTDHB1,LF1,LF2} of the (Dirac-Frenkel \cite{DF1,DF2})
time-dependent variational principle with Lagrange multipliers for each species' orbitals, ensuring thereby
the orthonormality of the orbitals for all times. In such a way,
matrix-elements appear within the formulation explicitly,
before the variation with respect to either the expansion coefficients
or the orbitals is performed. The equations-of-motion for the expansion
coefficients of the multiconfigurational wave-function are obtained by taking the
variation of the action functional when it is expressed explicitly in terms of the expansion coefficients.
The Combinadic-based mapping \cite{mapping} lifts the necessity to work with the huge matrix
representation of the Hamiltonian with respect to the configurations,
and allows one to efficiently perform operations on the vector of expansion coefficients directly.
The equations-of-motion for the orbitals are obtained by taking the variation of the action
functional when it is expressed explicitly in terms of the (integrals of the) orbitals.
When this is performed, expectation values of the various density
operators in the Hamiltonian (with respect to the many-particle wave-function) emerge
which can be efficiently computed utilizing the Combinadic-based mapping \cite{mapping}.
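To make the size of such calculations concrete, the number of time-dependent configurations per species follows standard counting of determinants (fermions) and permanents (bosons), and the total coefficient-vector length is their product. The helper below is our own illustration, not part of any package:

```python
from math import comb

def n_conf(N, M, kind):
    """Configurations of one species with N particles in M orbitals:
    binomial(M, N) determinants for fermions ('F'),
    binomial(N + M - 1, N) permanents for bosons ('B')."""
    return comb(M, N) if kind == "F" else comb(N + M - 1, N)

def coefficient_vector_length(species):
    """Length of the coefficient tensor {C_{J_A, J_B, J_C, ...}}:
    the product of per-species configuration counts."""
    size = 1
    for N, M, kind in species:
        size *= n_conf(N, M, kind)
    return size
```

For example, 2 fermions in 4 orbitals, 3 bosons in 2 orbitals and 1 boson in 3 orbitals give $6\times4\times3=72$ coefficients.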
\subsection{Additional ingredients
for mixtures}\label{SEC3.1}
For a mixture of three kinds of identical particles,
$N_A$ particles of type $A$,
$N_B$ particles of type $B$
and
$N_C$ particles of type $C$,
we need now two additional field operators expanded by different complete
sets of time-dependent orbitals:
\begin{equation}\label{field_3Mix}
\hat{\mathbf \Psi}_B(\y) = \sum_{k'} \hat b_{k'}(t) \psi_{k'}(\y,t), \qquad
\hat{\mathbf \Psi}_C(\z) = \sum_{k''} \hat c_{k''}(t)\chi_{k''}(\z,t),
\end{equation}
where the field operator for the $A$-species particles $\hat{\mathbf \Psi}_A(\x)$
was first introduced
and expanded in (\ref{field}).
Note that each species can have
a different spin,
hence the explicit three distinct coordinates
$\x$, $\y$ and $\z$.
Field operators of distinct particles (can be chosen to) commute.
Our starting point is the many-body Hamiltonian
of the most general 3-species mixture with up to 3-body
interactions:
\begin{eqnarray}\label{ham_3mix_general}
& & \hat H^{(ABC)} = \hat H^{(A)} + \hat H^{(B)} + \hat H^{(C)} + \hat W^{(AB)} + \hat W^{(AC)} + \hat W^{(BC)} + \\
& & + \hat U^{(AAB)} + \hat U^{(ABB)} + \hat U^{(AAC)} + \hat U^{(ACC)}
+ \hat U^{(BBC)} + \hat U^{(BCC)} + \hat U^{(ABC)}. \nonumber
\end{eqnarray}
Here, $\hat H^{(A)}$, $\hat H^{(B)}$ and $\hat H^{(C)}$
are the single-species Hamiltonians that can be read off (\ref{ham}).
The inter-species two-body interaction parts are given by:
\begin{eqnarray}\label{2_body_forces}
& & \hat W^{(AB)} = \int \!\! \int d{\bf x} d{\bf y}\hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_B(\y)
\hat W^{(AB)}(\x,\y) \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x), \nonumber \\
& & \hat W^{(AC)} = \int \!\! \int d{\bf x}d{\bf z}\hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_C(\z)
\hat W^{(AC)}(\x,\z) \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_A(\x), \nonumber \\
& & \hat W^{(BC)} = \int \!\! \int d{\bf y}d{\bf z}\hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_C(\z)
\hat W^{(BC)}(\y,\z) \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y).
\end{eqnarray}
The
inter-species three-body interaction parts,
resulting from the force between two identical particles and a
third distinct particle,
are given by:
\begin{eqnarray}\label{binary_3_body_forces}
\hat U^{(AAB)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d\x' d{\bf y}\hat{\mathbf \Psi}^\dag_A(\x)
\hat{\mathbf \Psi}^\dag_A(\x') \hat{\mathbf \Psi}^\dag_B(\y) \hat U^{(AAB)}(\x,\x',\y) \times \nonumber \\
& & \times \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x') \hat{\mathbf \Psi}_A(\x), \nonumber \\
\hat U^{(ABB)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d{\bf y}d\y' \hat{\mathbf \Psi}^\dag_A(\x)
\hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_B(\y') \hat U^{(ABB)}(\x,\y,\y') \times \nonumber \\
& & \times \hat{\mathbf \Psi}_B(\y') \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x), \nonumber \\
\hat U^{(AAC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d\x' d{\bf z}\hat{\mathbf \Psi}^\dag_A(\x)
\hat{\mathbf \Psi}^\dag_A(\x') \hat{\mathbf \Psi}^\dag_C(\z) \hat U^{(AAC)}(\x,\x',\z) \times \nonumber \\
& & \times \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_A(\x') \hat{\mathbf \Psi}_A(\x), \nonumber \\
\hat U^{(ACC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d{\bf z}d\z' \hat{\mathbf \Psi}^\dag_A(\x)
\hat{\mathbf \Psi}^\dag_C(\z) \hat{\mathbf \Psi}^\dag_C(\z') \hat U^{(ACC)}(\x,\z,\z') \times \nonumber \\
& & \times \hat{\mathbf \Psi}_C(\z') \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_A(\x), \nonumber \\
\hat U^{(BBC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf y}d\y' d{\bf z}\hat{\mathbf \Psi}^\dag_B(\y)
\hat{\mathbf \Psi}^\dag_B(\y') \hat{\mathbf \Psi}^\dag_C(\z) \hat U^{(BBC)}(\y,\y',\z) \times \nonumber \\
& & \times \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y') \hat{\mathbf \Psi}_B(\y), \nonumber \\
\hat U^{(BCC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf y}d{\bf z}d\z' \hat{\mathbf \Psi}^\dag_B(\y)
\hat{\mathbf \Psi}^\dag_C(\z) \hat{\mathbf \Psi}^\dag_C(\z') \hat U^{(BCC)}(\y,\z,\z') \times \nonumber \\
& & \times \hat{\mathbf \Psi}_C(\z') \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y).
\end{eqnarray}
Finally, the inter-species three-body interaction part,
resulting from the force between three different particles
is given by:
\begin{eqnarray}\label{3_body_forces}
\hat U^{(ABC)} &=& \int \!\! \int \!\! \int \!\! d{\bf x}d{\bf y}d{\bf z}\hat{\mathbf \Psi}^\dag_A(\x)
\hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_C(\z) \hat U^{(ABC)}(\x,\y,\z) \times \nonumber \\
& & \times \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x).
\end{eqnarray}
When all the above are combined,
i.e., the field operators
$\hat{\mathbf \Psi}_A(\x)$,
$\hat{\mathbf \Psi}_B(\y)$
and
$\hat{\mathbf \Psi}_C(\z)$
are substituted into the various interaction terms,
we find the following second-quantized expression for the mixture's Hamiltonian:
\begin{eqnarray}\label{ham_mix_2nd}
& & \hat H^{(ABC)} =
\sum_{k,q} h^{(A)}_{kq} \hat \rho^{(A)}_{kq}
+ \frac{1}{2} \sum_{k,s,q,l} W^{(A)}_{ksql} \hat \rho^{(A)}_{kslq}
+ \frac{1}{6}\sum_{k,s,p,r,l,q} U^{(A)}_{kspqlr} \hat \rho^{(A)}_{ksprlq} + \nonumber \\
& & + \sum_{k',q'} h^{(B)}_{k'q'} \hat \rho^{(B)}_{k'q'}
+ \frac{1}{2} \sum_{k',s',q',l'} W^{(B)}_{k's'q'l'} \hat \rho^{(B)}_{k's'l'q'}
+ \frac{1}{6}\sum_{k',s',p',r',l',q'} U^{(B)}_{k's'p'q'l'r'} \hat \rho^{(B)}_{k's'p'r'l'q'} + \nonumber \\
& & + \sum_{k'',q''} h^{(C)}_{k''q''} \hat \rho^{(C)}_{k''q''}
+ \frac{1}{2} \sum_{k'',s'',q'',l''} W^{(C)}_{k''s''q''l''} \hat \rho^{(C)}_{k''s''l''q''} + \nonumber \\
& &
+ \frac{1}{6}\sum_{k'',s'',p'',r'',l'',q''} U^{(C)}_{k''s''p''q''l''r''}
\hat \rho^{(C)}_{k''s''p''r''l''q''} + \nonumber \\
& & + \sum_{k,k',q,q'} W^{(AB)}_{kk'qq'} \hat\rho^{(A)}_{kq} \hat\rho^{(B)}_{k'q'}
+ \sum_{k,k'',q,q''} W^{(AC)}_{kk''qq''} \hat\rho^{(A)}_{kq} \hat\rho^{(C)}_{k''q''}
+ \sum_{k',k'',q',q''} W^{(BC)}_{k'k''q'q''} \hat\rho^{(B)}_{k'q'} \hat\rho^{(C)}_{k''q''} + \nonumber \\
& & + \frac{1}{2} \sum_{k,k',s,q,q',l} U^{(AAB)}_{kk'sqq'l} \hat\rho^{(A)}_{kslq} \hat\rho^{(B)}_{k'q'}
+ \frac{1}{2} \sum_{k,k',s',q,q',l'} U^{(ABB)}_{kk's'qq'l'} \hat\rho^{(A)}_{kq} \hat\rho^{(B)}_{k's'l'q'} +
\nonumber \\
& & + \frac{1}{2} \sum_{k,k'',s,q,q'',l} U^{(AAC)}_{kk''sqq''l} \hat\rho^{(A)}_{kslq} \hat\rho^{(C)}_{k''q''}
+ \frac{1}{2} \sum_{k,k'',s'',q,q'',l''} U^{(ACC)}_{kk''s''qq''l''}
\hat\rho^{(A)}_{kq} \hat\rho^{(C)}_{k''s''l''q''} + \nonumber \\
& & + \frac{1}{2} \sum_{k',k'',s',q',q'',l'} U^{(BBC)}_{k'k''s'q'q''l'} \hat\rho^{(B)}_{k's'l'q'}
\hat\rho^{(C)}_{k''q''}
+ \frac{1}{2} \sum_{k',k'',s'',q',q'',l''} U^{(BCC)}_{k'k''s''q'q''l''}
\hat\rho^{(B)}_{k'q'} \hat\rho^{(C)}_{k''s''l''q''} + \nonumber \\
& & + \sum_{k,k',k'',q,q',q''} U^{(ABC)}_{kk'k''qq'q''} \hat\rho^{(A)}_{kq} \hat\rho^{(B)}_{k'q'}
\hat\rho^{(C)}_{k''q''}.
\end{eqnarray}
$\hat H^{(ABC)}$
governs
the non-equilibrium dynamics (and statics)
of the mixture,
and the
most efficient way to treat
this dynamics is by
specifying the MCTDH method for the mixture,
making use of
the building bricks
of the previous section \ref{SEC2}.
We see in (\ref{ham_mix_2nd})
two kinds of ingredients.
First, there are matrix elements (integrals)
of the various interaction terms with respect to the orbitals.
For the flow of exposition and for completeness,
we list them in Appendix \ref{appendix_C}.
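As a schematic of how one such inter-species integral can be evaluated on a grid, consider $W^{(AB)}_{kk'qq'}$ with a rectangle-rule quadrature; all names are our own, and realistic implementations exploit the structure of $\hat W^{(AB)}$ rather than storing the full kernel:

```python
import numpy as np

def W_AB_element(phi_k, phi_q, psi_kp, psi_qp, W, dx, dy):
    """W^{(AB)}_{k k' q q'} = int int phi_k*(x) psi_k'*(y) W(x, y)
    * phi_q(x) psi_q'(y) dx dy, with kernel W[i, j] = W(x_i, y_j)."""
    return (phi_k.conj() * phi_q) @ W @ (psi_kp.conj() * psi_qp) * dx * dy
```

For a constant kernel the integral factorizes into the two one-species overlaps, which provides a quick sanity check of the quadrature.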
Second,
there are various density operators in $\hat H^{(ABC)}$.
The $B$ and $C$ intra-species density operators
can be read directly from Eqs.~(\ref{density_oper_1B},\ref{density_oper_2B_3B}),
when replacing therein the $A$-species quantities.
The inter-species density operators in (\ref{ham_mix_2nd})
can all be represented as
appropriate products of
the one-body density operators:
$\left\{\hat \rho^{(A)}_{kq}\right\}$,
$\left\{\hat \rho^{(B)}_{k'q'}\right\}$
and
$\left\{\hat \rho^{(C)}_{k''q''}\right\}$.
These are the (basic) building bricks
of our theory for mixtures.
But how to operate with them
on many-particle wave-functions of mixtures?
The multiconfigurational ansatz for a
mixture of three kinds of identical particles now
takes on the form:
\begin{equation}\label{3Mix_ansatz}
\left|\Psi^{(ABC)}(t)\right> =
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1}
\sum^{N^{(B)}_{\mathit{conf}}}_{J_B=1}
\sum^{N^{(C)}_{\mathit{conf}}}_{J_C=1}
C_{J_A,J_B,J_C}(t) \left|J_A,J_B,J_C;t\right>,
\end{equation}
where we denote hereafter $\vec J = (J_A, J_B, J_C)$ for brevity,
such that $C_{\vec J}(t) \equiv C_{J_A,J_B,J_C}(t)$,
$\left|\vec J;t\right> \equiv \left|J_A,J_B,J_C;t\right>$
and
$\sum_{\{\vec J\}} \equiv
\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1}
\sum^{N^{(B)}_{\mathit{conf}}}_{J_B=1}
\sum^{N^{(C)}_{\mathit{conf}}}_{J_C=1}$.
To prescribe the action of operators on the multiconfigurational
wave-function of the mixture (\ref{3Mix_ansatz}),
all we need to know is how the density operators
operate on $\left|\Psi^{(ABC)}(t)\right>$.
The operation
of the basic, one-body density operators,
whether $\hat \rho^{(A)}_{kq}$,
$\hat \rho^{(B)}_{k'q'}$
or
$\hat \rho^{(C)}_{k''q''}$
can be read off directly from Eqs.~(\ref{O_den}-\ref{C_three})
and we will not repeat them here
(one just needs to replace therein $J_A$ by $\vec J$
in the overall notation,
and $M_A$ by $M_B$ or $M_C$,
when appropriate; also see \cite{mapping}).
For the inter-species two-body density operators we have:
\begin{equation}\label{2B_mix_dens_oper}
C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t), \qquad
C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \qquad
C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t).
\end{equation}
The notation in (\ref{2B_mix_dens_oper})
is to be understood as follows:
The two one-body density operators (in each case) are written
as superscripts on the same level,
signifying that they commute with one another.
The operation of the two one-body density operators
on $\left|\Psi^{(ABC)}(t)\right>$ is performed
sequentially, i.e., the first operates on
$\left|\Psi^{(ABC)}(t)\right>$ and the second
operates on the outcome.
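In tensor language, this sequential operation acts on the coefficient tensor $C_{J_A,J_B,J_C}$ along the axis of the corresponding species; since different species address different axes, the order is immaterial. The dense-matrix representation of the mapped operators below is our own illustration (real codes act through the Combinadic-based mapping instead):

```python
import numpy as np

def apply_on_species(C, op, axis):
    """Act a single-species operator (a matrix over that species'
    configurations) on the coefficient tensor C[J_A, J_B, J_C]
    along the corresponding species axis."""
    return np.moveaxis(np.tensordot(op, C, axes=(1, axis)), 0, axis)
```

Applying an $A$-species operator on axis 0 and a $B$-species operator on axis 1 gives the same result in either order, mirroring the commutativity noted above.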
Finally,
for the inter-species three-body
density operators we have:
\begin{eqnarray}\label{3B_mix_dens_oper}
& &
C^{\hat \rho^{(A)}_{kslq} \hat \rho^{(B)}_{k'q'}}_{\vec J}(t), \qquad
C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k's'l'q'}}_{\vec J}(t), \qquad
C^{\hat \rho^{(A)}_{kslq} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \nonumber \\
& &
C^{\hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t), \qquad
C^{\hat \rho^{(B)}_{k's'l'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \qquad
C^{\hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t),
\end{eqnarray}
where the operation of the
two-body density operators appearing in the superscripts
is further decomposed to operations of one-body density operators
on $\left|\Psi^{(ABC)}(t)\right>$
analogously to Eq.~(\ref{basic_mapping_2B_3B}) [see Appendix \ref{appendix_A}],
and
\begin{equation}\label{3B_mix_dens_oper_3}
C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t).
\end{equation}
Now we are in the position to write the
action of operators on the multiconfigurational
wave-function of the mixture (\ref{3Mix_ansatz}).
This is collected for ease of reading and for completeness in
Appendix \ref{appendix_A}.
We have gathered most ingredients for the
derivation of the equations-of-motion,
which is written
down in the subsequent section \ref{SEC3.2}.
There are four possible mixtures (Fermi-Fermi-Fermi,
Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose),
and the resulting MCTDH-FFF,
MCTDH-BFF, MCTDH-BBF and MCTDH-BBB
are
to be derived
and
presented in a unified manner,
in the spirit of what has been done
in the single-species case \cite{book,unified} (and the previous section \ref{SEC2})
and for mixtures of two kinds of identical particles
\cite{book,MCTDHX}.
| 2,489 | 51,553 |
en
|
train
|
0.176.9
|
\subsection{Equations-of-motion utilizing
one-body density operators and Combinadic-based mapping
for mixtures}\label{SEC3.2}
The action functional of the time-dependent
many-particle Schr\"odinger equation takes on the form:
\begin{eqnarray}\label{func_ABC}
& & S\left[\left\{C_{\vec J}(t)\right\},\left\{\phi_k(\x,t)\right\},
\left\{\psi_{k'}(\y,t)\right\},\left\{\chi_{k''}(\z,t)\right\}\right] = \\
& &
\int dt \Bigg\{\left< \Psi^{(ABC)}(t) \left| \hat H^{(ABC)} - i\frac{\partial}{\partial t}\right| \Psi^{(ABC)}(t)\right>
- \nonumber \\
& & - \sum_{k,j}^{M_A} \mu_{kj}^{(A)}(t) \left[\left<\phi_k \left|\right.\phi_j\right> - \delta_{kj}\right]
- \sum_{k',j'}^{M_B} \mu_{k'j'}^{(B)}(t) \left[\left<\psi_{k'} \left|\right.\psi_{j'}\right> - \delta_{k'j'}\right]
- \nonumber \\
& &
- \sum_{k'',j''}^{M_C} \mu_{k''j''}^{(C)}(t) \left[\left<\chi_{k''} \left|\right.\chi_{j''}\right> -
\delta_{k''j''}\right]
- \varepsilon^{(ABC)}(t) \left[\sum_{\{\vec J\}} \left|C_{\vec J}(t)\right|^2 - 1 \right]\Bigg\}, \nonumber
\end{eqnarray}
where the time-dependent Lagrange multipliers
$\left\{\mu_{kj}^{(A)}(t)\right\}$,
$\left\{\mu_{k'j'}^{(B)}(t)\right\}$
and\break
$\left\{\mu_{k''j''}^{(C)}(t)\right\}$
are introduced, respectively, to ensure the orthonormality of the
$A$-, $B$- and $C$-species orbitals at all times.
Note that orbitals of distinct particles need
not be orthogonal to each other.
As for the single-species theory,
the Lagrange multiplier
$\varepsilon^{(ABC)}(t)$
is redundant in the time-dependent case
and will resurface in the static theory.
In what follows we present the main steps of the derivation.
More details and various quantities needed
for the derivation and in particular for
the implementation of the equations-of-motion are deferred to Appendix \ref{appendix_B}
and Appendix \ref{appendix_C}.
To perform the variation of the action functional (\ref{func_ABC})
with respect to the coefficients,
we write the expectation value of $\hat H^{(ABC)}$ with respect to $\left|\Psi^{(ABC)}(t)\right>$
in a form which is
explicit with respect to the coefficients:
\begin{eqnarray}\label{expectation_H_ABC_C}
& & \left<\Psi^{(ABC)}(t)\left| \hat H^{(ABC)} - i\frac{\partial}{\partial t} \right|\Psi^{(ABC)}(t)\right> =
\nonumber \\
& & \qquad = \sum_{\{\vec J\}}
C^\ast_{\vec J}(t)
\left[ C^{\hat H^{(ABC)} - i\frac{\partial}{\partial t}^{(A)} - i\frac{\partial}{\partial t}^{(B)}
- i\frac{\partial}{\partial t}^{(C)}}_{\vec J}\!\!(t) - i \dot C_{\vec J}(t) \right].
\end{eqnarray}
The three time-derivative operators
$i\frac{\partial}{\partial t}^{(A)}$, $i\frac{\partial}{\partial t}^{(B)}$ and
$i\frac{\partial}{\partial t}^{(C)}$ make it clear that to each species
there is associated a different one-body operator
representing the derivative of orbitals in time.
Performing the variation of
$ S\left[\left\{C_{\vec J}(t)\right\},\left\{\phi_k(\x,t)\right\},
\left\{\psi_{k'}(\y,t)\right\},\left\{\chi_{k''}(\z,t)\right\}\right]$
with respect to the expansion coefficients
$\left\{ C^\ast_{\vec J}(t)\right\}$,
we then make use of the differential conditions for the orbitals of each species,
\begin{eqnarray}\label{diff_con_BC}
& & \left\{i\frac{\partial}{\partial t}^{(B)}\right\}_{k'q'} \equiv
i\left<\psi_{k'} \left|\dot\psi_{q'}\right>\right. = 0, \ \ k',q'=1,\ldots,M_B, \nonumber \\
& & \left\{i\frac{\partial}{\partial t}^{(C)}\right\}_{k''q''} \equiv
i\left<\chi_{k''} \left|\dot\chi_{q''}\right>\right. = 0, \ \ k'',q''=1,\ldots,M_C,
\end{eqnarray}
where the differential conditions with respect to
the $A$-species orbitals have been introduced in (\ref{diff_con_A}).
This leads to the final result for the
equations-of-motion for the expansion
coefficients:
\begin{eqnarray}\label{C_MIX_gen_phi_phidot}
& & C^{\hat H^{(ABC)}}_{\vec J}(t) = i \dot C_{\vec J}(t), \qquad \forall \vec J, \nonumber \\
& & C^{\hat H^{(ABC)}}_{\vec J}(t) =
C^{\hat H^{(A)}}_{\vec J}(t) + C^{\hat H^{(B)}}_{\vec J}(t) + C^{\hat H^{(C)}}_{\vec J}(t) + \nonumber \\
& & + C^{\hat W^{(AB)}}_{\vec J}(t) + C^{\hat W^{(AC)}}_{\vec J}(t) + C^{\hat W^{(BC)}}_{\vec J}(t) + \nonumber \\
& & + C^{\hat U^{(AAB)}}_{\vec J}(t) + C^{\hat U^{(ABB)}}_{\vec J}(t) + C^{\hat U^{(AAC)}}_{\vec J}(t) +
C^{\hat U^{(ACC)}}_{\vec J}(t) + \nonumber \\
& & + C^{\hat U^{(BBC)}}_{\vec J}(t) + C^{\hat U^{(BCC)}}_{\vec J}(t) + C^{\hat U^{(ABC)}}_{\vec J}(t).
\end{eqnarray}
We remark that other forms of the
differential conditions (\ref{diff_con_A},\ref{diff_con_BC})
can be used,
in particular,
each species can have a different form depending on the physical
problem at hand and on numerical needs.
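To make the structure of the coefficients' equations-of-motion concrete, the following minimal numerical sketch propagates a coefficient vector under $i\dot C = \mathcal{H}C$ with one classical Runge--Kutta step per time slice. The small dense matrix standing in for the action of $\hat H^{(ABC)}$ on $C_{\vec J}$ is purely hypothetical; in practice the product is evaluated matrix-free via the density-operator mapping.

```python
import numpy as np

# Minimal sketch (not the production algorithm): propagate i dC/dt = H C
# for a hypothetical, fixed Hamiltonian matrix H acting on the coefficient
# vector, using one classical RK4 step per time slice.
def rk4_step(H, C, dt):
    f = lambda c: -1j * (H @ c)
    k1 = f(C)
    k2 = f(C + 0.5 * dt * k1)
    k3 = f(C + 0.5 * dt * k2)
    k4 = f(C + dt * k3)
    return C + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(H, C0, dt, nsteps):
    # repeated RK4 steps; H is assumed time-independent for this sketch
    C = C0.astype(complex)
    for _ in range(nsteps):
        C = rk4_step(H, C, dt)
    return C
```

For a Hermitian matrix the propagation is unitary, so the norm of $C$ is conserved up to the integrator's truncation error.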
Let us
move to the equations-of-motion
for the orbitals
$\left\{\phi_k(\x,t)\right\}$,
$\left\{\psi_{k'}(\y,t)\right\}$
and
$\left\{\chi_{k''}(\z,t)\right\}$.
For this,
we express the expectation value\break
$\left<\Psi^{(ABC)}\left|\hat H^{(ABC)} - i\frac{\partial}{\partial t}\right|\Psi^{(ABC)}\right>$
in a form which explicitly depends
on the various
integrals with respect to the orbitals.
The result is lengthy and given
in Appendix \ref{appendix_C}.
In particular,
the expectation values of the various density
operators in $\hat H^{(ABC)}$ [Eq.~(\ref{ham_mix_2nd})]
emerge
as matrix elements of
the different intra-species and inter-species
reduced density matrices.
For ease of reading and for completeness,
we collect in Appendix \ref{appendix_B}
all reduced density matrices
and their respective matrix elements needed in the theory
and its numerical implementation.
We can now proceed and perform the variation
of the action functional (\ref{func_ABC}) with respect to the orbitals.
Performing the variation
with respect to
$\left\{\phi^\ast_k(\x,t)\right\}$,
$\left\{\psi^\ast_{k'}(\y,t)\right\}$
and
$\left\{\chi^\ast_{k''}(\z,t)\right\}$,
making use of the orthonormality relations
of each species' orbitals,
we solve for the Lagrange multipliers,
$k,j=1,\ldots,M_A$,
$k',j'=1,\ldots,M_B$
and
$k'',j''=1,\ldots,M_C$:
\begin{eqnarray}\label{LM_A_B_C}
& & \mu_{kj}^{(A)}(t) =
\left<\phi_j\left|
\sum^{M_A}_{q=1} \left( \rho^{(A)}_{kq} \left[ \hat h^{(A)}
- i\frac{\partial}{\partial t}^{(A)} \right] +
\{\rho_2 \hat W\}^{(A)}_{kq} + \{\rho_3 \hat U\}^{(A)}_{kq}
\right) \right|\phi_q\right>, \ \\
& & \mu_{k'j'}^{(B)}(t) =
\left<\psi_{j'}\left|
\sum^{M_B}_{q'=1} \left( \rho^{(B)}_{k'q'} \left[ \hat h^{(B)}
- i\frac{\partial}{\partial t}^{(B)} \right] +
\{\rho_2 \hat W\}^{(B)}_{k'q'} + \{\rho_3 \hat U\}^{(B)}_{k'q'}
\right) \right|\psi_{q'}\right>, \nonumber \\
& & \mu_{k''j''}^{(C)}(t) =
\left<\chi_{j''}\left|
\sum^{M_C}_{q''=1} \left( \rho^{(C)}_{k''q''} \left[ \hat h^{(C)}
- i\frac{\partial}{\partial t}^{(C)} \right] +
\{\rho_2 \hat W\}^{(C)}_{k''q''} + \{\rho_3 \hat U\}^{(C)}_{k''q''}
\right) \right|\chi_{q''}\right>. \nonumber \
\end{eqnarray}
The terms appearing in the Lagrange
multipliers are all defined in Appendix \ref{appendix_C}.
We discuss them below,
after we arrive at the final form of the equations-of-motion
for the orbitals.
To proceed we introduce
the projection operators for the mixture:
\begin{equation}\label{project_BC}
\hat {\mathbf P}^{(B)} = 1 - \sum_{u'=1}^{M_B} \left|\psi_{u'}\right>\left<\psi_{u'}\right|, \qquad
\hat {\mathbf P}^{(C)} = 1 - \sum_{u''=1}^{M_C} \left|\chi_{u''}\right>\left<\chi_{u''}\right|,
\end{equation}
where the projection operator
for the $A$-species orbitals $\hat {\mathbf P}^{(A)}$
was defined in (\ref{project_A}).
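The action of such a projector is straightforward to realize numerically. The sketch below, under the assumption of an equidistant grid discretization with spacing `dx` and orbitals orthonormal with respect to the grid quadrature, applies $\hat{\mathbf P} = 1 - \sum_u \left|\psi_u\right>\left<\psi_u\right|$ to a trial function; all names are illustrative.

```python
import numpy as np

# Minimal sketch (assumed grid discretization): apply the projector
# P = 1 - sum_u |psi_u><psi_u| to a trial function f sampled on an
# equidistant grid with spacing dx.  The orbitals are assumed orthonormal
# with respect to the quadrature <a|b> ~ sum(conj(a)*b)*dx.
def project_out(orbitals, f, dx):
    g = f.astype(complex)
    for psi in orbitals:                 # subtract each orbital component
        overlap = np.vdot(psi, g) * dx   # <psi_u|f> by quadrature
        g = g - overlap * psi
    return g
```

By construction the result is orthogonal to every orbital, and applying the projector twice changes nothing (idempotency).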
Now,
eliminating the Lagrange multipliers (\ref{LM_A_B_C})
and making
use of the differential conditions
for each species (\ref{diff_con_A},\ref{diff_con_BC}),
we
obtain the final form
of the equations-of-motion
of the orbitals of the mixture,
$j=1,\ldots,M_A$,
$j'=1,\ldots,M_B$ and
$j''=1,\ldots,M_C$:
\begin{eqnarray}\label{EOM_final_orbitals_3mix}
& & \!\!\!\!\!\!\!\! i\left|\dot\phi_j\right> = \hat {\mathbf P}^{(A)}
\left[\hat h^{(A)} \left|\phi_j\right> +
\sum^{M_A}_{k,q=1} \left\{\brho^{(A)}(t)\right\}^{-1}_{jk}
\bigg( \{\rho_2 \hat W\}^{(A)}_{kq} + \{\rho_3 \hat U\}^{(A)}_{kq} \bigg)
\left|\phi_q\right> \right], \\
& & \!\!\!\!\!\!\!\! i\left|\dot\psi_{j'}\right> = \hat {\mathbf P}^{(B)}
\left[\hat h^{(B)} \left|\psi_{j'}\right> +
\sum^{M_B}_{k',q'=1} \left\{\brho^{(B)}(t)\right\}^{-1}_{j'k'}
\bigg( \{\rho_2 \hat W\}^{(B)}_{k'q'} + \{\rho_3 \hat U\}^{(B)}_{k'q'} \bigg)
\left|\psi_{q'}\right> \right], \nonumber \\
& & \!\!\!\!\!\!\!\! i\left|\dot\chi_{j''}\right> = \hat {\mathbf P}^{(C)}
\left[\hat h^{(C)} \left|\chi_{j''}\right> +
\sum^{M_C}_{k'',q''=1} \left\{\brho^{(C)}(t)\right\}^{-1}_{j''k''}
\bigg( \{\rho_2 \hat W\}^{(C)}_{k''q''} + \{\rho_3 \hat U\}^{(C)}_{k''q''} \bigg)
\left|\chi_{q''}\right> \right]. \nonumber \
\end{eqnarray}
We see the appealing structure of the equations-of-motion
for the orbitals.
The various one-body operators which assemble the contributions from different orders of the
interactions, corresponding to the one-, two- and three-body parts of
the many-particle Hamiltonian $\hat H^{(ABC)}$ (\ref{ham_3mix_general}),
are separated.
Moreover,
each one-body operator is composed
of
products of reduced density matrices of increasing order
times one-body potentials resulting from
interactions of the same order (see Appendix \ref{appendix_C}
for the explicit terms).
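The structure of the right-hand side of the orbital equations-of-motion can be sketched numerically as follows. The sketch assumes a grid discretization and dense placeholders: `h` is the one-body Hamiltonian matrix, `VW[k][q]` and `VU[k][q]` stand for hypothetical precomputed local interaction potentials acting diagonally on the grid, and the inverse of the reduced density matrix is taken as a pseudo-inverse, a common practical regularization choice (an assumption, not part of the derivation above).

```python
import numpy as np

# Minimal sketch of the right-hand side of the orbital equations-of-motion
# for one species on a grid of n points with spacing dx.  All operator
# representations (h, VW, VU, rho) are hypothetical dense placeholders.
def orbital_rhs(h, rho, VW, VU, phis, dx):
    M = len(phis)
    rho_inv = np.linalg.pinv(rho)        # regularized inverse of rho (assumption)
    rhs = []
    for j in range(M):
        r = (h @ phis[j]).astype(complex)
        for k in range(M):
            for q in range(M):
                # interaction potentials act as diagonal (local) operators
                r = r + rho_inv[j, k] * (VW[k][q] + VU[k][q]) * phis[q]
        # apply the projector P = 1 - sum_u |phi_u><phi_u|
        for u in range(M):
            r = r - (np.vdot(phis[u], r) * dx) * phis[u]
        rhs.append(-1j * r)              # i d(phi_j)/dt = P[...]
    return rhs
```

Because the projector removes all components along the occupied orbitals, the resulting time derivatives are orthogonal to the orbital space, in line with the differential conditions.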
This separation,
first put forward in this context
for the single-species static theory for bosons (MCHB) \cite{MCHB},
is not only theoretically appealing,
but is also expected to make
the implementation of the theory
more efficient
in the case
of higher-body
forces.
Equations-of-motion (\ref{EOM_final_orbitals_3mix}) for the
orbitals together with (\ref{C_MIX_gen_phi_phidot})
for the expansion coefficients constitute
the
propagation theory for mixtures of three kinds
of identical particles,
interacting with all possible interactions
up to three-body forces.
All four possible mixtures (Fermi-Fermi-Fermi,
Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose)
are presented in a unified manner;
the respective theories are denoted by the acronyms
MCTDH-FFF, MCTDH-BFF, MCTDH-BBF and
MCTDH-BBB.
To conclude our work,
we note that, for time-independent
Hamiltonians, one can compute
self-consistent ground
and excited states of
3-species mixtures
by imaginary time propagation.
Substituting
$t \to -it$ into the equations-of-motion
for the coefficients and orbitals, Eqs.~(\ref{C_MIX_gen_phi_phidot},\ref{EOM_final_orbitals_3mix}),
the final time-independent (static)
theory reads,
$k=1,\ldots,M_A$,
$k'=1,\ldots,M_B$ and
$k''=1,\ldots,M_C$:
\begin{eqnarray}\label{statical_3mix}
& &
\sum_{q=1}^{M_A} \left[ \rho^{(A)}_{kq} \hat h^{(A)} +
\{\rho_2 \hat W\}^{(A)}_{kq} + \{\rho_3 \hat U\}^{(A)}_{kq}
\right] \left|\phi_q\right> =
\sum_{j=1}^{M_A} \mu_{kj}^{(A)} \left|\phi_j\right>, \nonumber \\
& &
\sum_{q'=1}^{M_B} \left[ \rho^{(B)}_{k'q'} \hat h^{(B)} +
\{\rho_2 \hat W\}^{(B)}_{k'q'} + \{\rho_3 \hat U\}^{(B)}_{k'q'}
\right] \left|\psi_{q'}\right> =
\sum_{j'=1}^{M_B} \mu_{k'j'}^{(B)} \left|\psi_{j'}\right>, \nonumber \\
& &
\sum_{q''=1}^{M_C} \left[ \rho^{(C)}_{k''q''} \hat h^{(C)} +
\{\rho_2 \hat W\}^{(C)}_{k''q''} + \{\rho_3 \hat U\}^{(C)}_{k''q''}
\right] \left|\chi_{q''}\right> =
\sum_{j''=1}^{M_C} \mu_{k''j''}^{(C)} \left|\chi_{j''}\right>, \nonumber \\
& &
\qquad \qquad
C^{\hat H^{(ABC)}}_{\vec J} = \varepsilon^{(ABC)} C_{\vec J}, \qquad \forall \vec J, \
\end{eqnarray}
where, making use of the normalization of the static
many-particle wave-function $\left|\Psi^{(ABC)}\right>$,
$\varepsilon^{(ABC)}= \sum_{\vec J} C^\ast_{\vec J} C^{\hat H^{(ABC)}}_{\vec J}$
is the eigen-energy of the system.
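The substitution $t \to -it$ can be illustrated on the coefficients' part alone. The following minimal sketch relaxes a coefficient vector under a hypothetical small Hamiltonian matrix by normalized Euler steps of the imaginary-time flow; the lowest-energy state survives the damping.

```python
import numpy as np

# Minimal sketch of imaginary-time relaxation for the coefficient part:
# Euler steps C <- (1 - dt*H) C followed by renormalization, for a
# hypothetical small Hamiltonian matrix H.  The iteration damps all
# components except the one of lowest energy.
def relax(H, C0, dt, nsteps):
    C = C0 / np.linalg.norm(C0)
    for _ in range(nsteps):
        C = C - dt * (H @ C)
        C = C / np.linalg.norm(C)     # restore normalization
    return C
```

The converged vector is the ground state of `H`, and its energy is recovered as the quadratic form $C^\dagger H C$.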
Finally,
utilizing
the fact that the matrices of Lagrange multipliers
$\{\mu_{kj}^{(A)}\}$, $\{\mu_{k'j'}^{(B)}\}$ and $\{\mu_{k''j''}^{(C)}\}$
are Hermitian (for stationary states)
and the
invariance property of the multiconfigurational wave-function
(to unitary transformations of each species'
orbitals compensated by the `reverse' transformations
of the coefficients),
one can transform Eq.~(\ref{statical_3mix})
to a representation where
$\{\mu_{kj}^{(A)}\}$, $\{\mu_{k'j'}^{(B)}\}$ and $\{\mu_{k''j''}^{(C)}\}$
are diagonal matrices.
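The matrix part of this transformation amounts to an eigendecomposition of the Hermitian multiplier matrix. A minimal sketch (function names illustrative):

```python
import numpy as np

# Minimal sketch: a Hermitian Lagrange-multiplier matrix mu is brought to
# diagonal form by the unitary U from its eigendecomposition; in the full
# theory this unitary rotation of the orbitals is compensated by the
# `reverse' transformation of the expansion coefficients.
def diagonalize_multipliers(mu):
    vals, U = np.linalg.eigh(mu)          # mu = U diag(vals) U^dagger
    mu_diag = U.conj().T @ mu @ U
    return vals, U, mu_diag
```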
This concludes our
derivations.
\section{Brief summary and outlook}\label{SEC4}
In the present work we have specified the MCTDH
method for a new complicated system of relevance.
We have considered mixtures
of three kinds of identical particles
interacting via all combinations of two- and three-body forces.
We have derived the equations-of-motion for
the expansion coefficients, $\left\{C_{\vec J}(t)\right\}$,
and
the orbitals,
$\left\{\phi_k(\x,t)\right\}$,
$\left\{\psi_{k'}(\y,t)\right\}$
and
$\left\{\chi_{k''}(\z,t)\right\}$,
see Eqs.~(\ref{C_MIX_gen_phi_phidot},\ref{EOM_final_orbitals_3mix}).
The self-consistent
static theory has
been derived as well,
see Eq.~(\ref{statical_3mix}).
All quantities needed for the implementation
of the theory have been prescribed in detail.
On the methodological level,
we have represented the
coefficients' part of the equations-of-motion
in a compact recursive form
in terms of one-body density operators only,
$\left\{\hat \rho^{(A)}_{kq}\right\}$,
$\left\{\hat \rho^{(B)}_{k'q'}\right\}$
and
$\left\{\hat \rho^{(C)}_{k''q''}\right\}$.
The recursion utilizes the recently proposed
Combinadic-based mapping for fermionic and bosonic operators in Fock space \cite{mapping}
that has been
successfully applied and implemented within the MCTDHB package \cite{package}.
Our derivation sheds
new light on the
representation of the coefficients'
part in MCTDHF and MCTDHB
without resorting to the
matrix elements of the many-body Hamiltonian
with respect to the time-dependent configurations,
and suggests a recipe for
efficient implementation of
MCTDH-FFF, MCTDH-BFF, MCTDH-BBF and
MCTDH-BBB
which is well suited
for parallel implementation.
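The essence of a Combinadic (combinatorial number system) mapping in the spirit of Ref.~\cite{mapping} can be illustrated compactly: a strictly increasing tuple of occupied-orbital indices is ranked to a single integer address and back, without storing any lookup table. The sketch below is a generic textbook combinadic, not the specific implementation of Ref.~\cite{mapping}.

```python
from math import comb

# Minimal sketch of a combinadic index map: a strictly increasing tuple of
# occupied-orbital indices (e.g. a fermionic configuration) is ranked to a
# single integer in colexicographic order, and unranked back greedily.
def rank(combo):
    # combo: strictly increasing orbital indices, e.g. (0, 2, 5)
    return sum(comb(c, i + 1) for i, c in enumerate(combo))

def unrank(r, k):
    # invert `rank` for combinations of length k (greedy from the top)
    combo = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= r:
            c += 1
        r -= comb(c, i)
        combo.append(c)
    return tuple(reversed(combo))
```

The map is a bijection between the $\binom{n}{k}$ configurations and the addresses $0,\ldots,\binom{n}{k}-1$, which is what makes a matrix-free, parallel evaluation of the coefficients' part practical.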
As an outlook of the present theory,
let us imagine the possibility of conversion between the distinct particles,
say the conversion of the $A$ and $B$ species
to the $C$ species,
which can be written symbolically as the following ``reaction'':
$$
A + B \leftrightharpoons C.
$$
Such a process would be a model, e.g.,
for the resonant association
of hetero-nuclear ultra-cold molecules.
The derivation of an efficient MCTDH-{\it conversion} theory
in this case
would require the extension of the
Combinadic-based
mapping \cite{mapping}
to systems with particle conversion,
and the assembly of more building bricks
than just the one-body density operators used in the
present theory,
$\left\{\hat \rho^{(A)}_{kq}\right\}$,
$\left\{\hat \rho^{(B)}_{k'q'}\right\}$
and
$\left\{\hat \rho^{(C)}_{k''q''}\right\}$.
\section*{Acknowledgments}
The paper is dedicated
to Professor Debashis Mukherjee,
a dear colleague and friend,
on the occasion of his 65{\it th} birthday.
We are grateful to Hans-Dieter Meyer for multiple and continuous discussions on MCTDH,
and acknowledge financial support by the DFG.
\appendix
\section{Calculating expectation values
of operators in mixtures of three kinds of identical particles}\label{appendix_A}
Following \cite{mapping},
we write the general expectation value of an operator
$\hat O^{(3mix)}$ in a 3-species mixture as follows:
\begin{eqnarray}\label{expectation_3}
& & \left<\Psi^{(ABC)}(t)\left| \hat O^{(3mix)} \right|\Psi^{(ABC)}(t)\right> =
\left<\Psi^{(ABC)}(t)\left| \left\{ \hat O^{(3mix)} \right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum_{{\vec J}} C^\ast_{\vec J}(t) C^{\hat O^{(3mix)}}_{\vec J}(t),
\end{eqnarray}
where
\begin{equation}\label{O_Psi_3}
\hat O^{(3mix)} \left|\Psi^{(ABC)}(t)\right> =
\hat O^{(3mix)} \sum_{{\vec J}} C_{\vec J}(t) \left|\vec J;t\right> \equiv
\sum_{{\vec J}} C^{\hat O^{(3mix)}}_{\vec J}(t) \left|\vec J;t\right>.
\end{equation}
$\hat O^{(3mix)}$ can be a
one-, two- or three-body operator or
any combination thereof.
The operation of single-species operators,
whether $\hat O^{(A)}$, $\hat O^{(B)}$
or
$\hat O^{(C)}$ can be read off directly from Eqs.~(\ref{O_den}-\ref{C_three})
and we will not repeat them here
(one needs just to replace therein $J_A$ by $\vec J$
in the overall notation,
and $M_A$ by $M_B$ or $M_C$,
when appropriate; also see \cite{mapping}).
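The structure of Eq.~(\ref{expectation_3}) is that of a single scalar product once the operated coefficient vector is available. A minimal sketch with a hypothetical dense operator matrix standing in for the matrix-free action:

```python
import numpy as np

# Minimal sketch of the expectation value: once the operated coefficient
# vector C^O = O C is available (here via a hypothetical dense matrix O),
# the expectation value is the scalar product sum_J C*_J C^O_J.
def expectation(O, C):
    C_O = O @ C                    # coefficients of O|Psi> over the configurations
    return np.vdot(C, C_O)
```

For a Hermitian operator the result is real, which provides a cheap consistency check in an implementation.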
For the inter-species two-body operators we prescribe
the compact result for completeness.
For the two-body operators
$\hat O^{(AB)} = \sum_{k,k',q,q'} O^{(AB)}_{kk'qq'} \hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'}$,\break
$\hat O^{(AC)} = \sum_{k,k'',q,q''} O^{(AC)}_{kk''qq''} \hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{k''q''}$
and
$\hat O^{(BC)} = \sum_{k',k'',q',q''} O^{(BC)}_{k'k''q'q''} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}$
we find:
\begin{eqnarray}\label{C_3mix_2B}
C^{\hat O^{(AB)}}_{\vec J}(t) &=& \sum_{k,k',q,q'=1}^{M_A,M_B} O^{(AB)}_{kk'qq'}
C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t), \nonumber \\
C^{\hat O^{(AC)}}_{\vec J}(t) &=& \sum_{k,k'',q,q''=1}^{M_A,M_C} O^{(AC)}_{kk''qq''}
C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \nonumber \\
C^{\hat O^{(BC)}}_{\vec J}(t) &=& \sum_{k',k'',q',q''=1}^{M_B,M_C} O^{(BC)}_{k'k''q'q''}
C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t). \
\end{eqnarray}
Note the factorization
of the one-body (basic) density operators for
the inter-species operators,
which simplifies the way
the coefficients' vector is evaluated.
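The computational benefit of this factorization can be seen on a toy example. If the coefficient tensor is arranged as a matrix over the two species' configuration indices, a product operator $\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}$ acts as two successive one-species multiplications; the combined two-species (Kronecker-product) matrix never needs to be formed. The dense matrices below are hypothetical stand-ins for the density-operator actions.

```python
import numpy as np

# Minimal sketch of the factorization property: arrange the coefficient
# tensor as a matrix C[nA, nB] over the two species' configuration indices.
# A product operator rhoA (x) rhoB then acts as two successive one-species
# multiplications.  rhoA, rhoB are hypothetical dense representations.
def apply_factorized(rhoA, rhoB, C):
    return rhoA @ C @ rhoB.T       # act on the A index, then on the B index

# Reference: the same action via the explicit Kronecker-product matrix.
def apply_kron(rhoA, rhoB, C):
    v = np.kron(rhoA, rhoB) @ C.reshape(-1)
    return v.reshape(C.shape)
```

Both routes give identical results, but the factorized one scales far better with the configuration-space dimensions.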
For the inter-species three-body operators
resulting from the force between two identical particles and a third distinct one
we list the final result for completeness.
For the three-body operators
\begin{eqnarray}\label{3B_operators}
\hat O^{(AAB)} &=& \frac{1}{2} \sum_{k,k',s,q,q',l} O^{(AAB)}_{kk'sqq'l} \hat \rho^{(A)}_{kslq} \hat \rho^{(B)}_{k'q'},
\nonumber \\
\hat O^{(ABB)} &=& \frac{1}{2} \sum_{k,k',s',q,q',l'} O^{(ABB)}_{kk's'qq'l'} \hat \rho^{(A)}_{kq}
\hat \rho^{(B)}_{k's'l'q'}, \nonumber \\
\hat O^{(AAC)} &=& \frac{1}{2} \sum_{k,k'',s,q,q'',l} O^{(AAC)}_{kk''sqq''l} \hat \rho^{(A)}_{kslq}
\hat \rho^{(C)}_{k''q''}, \nonumber \\
\hat O^{(ACC)} &=& \frac{1}{2} \sum_{k,k'',s'',q,q'',l''} O^{(ACC)}_{kk''s''qq''l''} \hat \rho^{(A)}_{kq}
\hat \rho^{(C)}_{k''s''l''q''}, \nonumber \\
\hat O^{(BBC)} &=& \frac{1}{2} \sum_{k',k'',s',q',q'',l'} O^{(BBC)}_{k'k''s'q'q''l'} \hat \rho^{(B)}_{k's'l'q'}
\hat \rho^{(C)}_{k''q''}, \nonumber \\
\hat O^{(BCC)} &=& \frac{1}{2} \sum_{k',k'',s'',q',q'',l''} O^{(BCC)}_{k'k''s''q'q''l''} \hat \rho^{(B)}_{k'q'}
\hat \rho^{(C)}_{k''s''l''q''}, \
\end{eqnarray}
we find:
\begin{eqnarray}\label{C_3mix_binary_3B}
& & C^{\hat O^{(AAB)}}_{\vec J}(t) = \nonumber \\
&=& \frac{1}{2} \sum_{k,k',s,q,q',l=1}^{M_A,M_B} O^{(AAB)}_{kk'sqq'l}
\left[ \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t)
\mp {C^{\hat \rho^{(A)}_{sq}\hat \rho^{(B)}_{k'q'}}_{\vec J}}^{\hat \rho^{(A)}_{kl}}\!(t) \right],
\nonumber \\
& & C^{\hat O^{(ABB)}}_{\vec J}(t) = \nonumber \\
&=& \frac{1}{2} \sum_{k,k',s',q,q',l'=1}^{M_A,M_B} O^{(ABB)}_{kk's'qq'l'} \left[ \pm \delta_{s'l'}
C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'}}_{\vec J}(t)
\mp {C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{s'q'}}_{\vec J}}^{\hat \rho^{(B)}_{k'l'}}\!(t) \right],
\nonumber \\
& & C^{\hat O^{(AAC)}}_{\vec J}(t) = \nonumber \\
&=& \frac{1}{2} \sum_{k,k'',s,q,q'',l=1}^{M_A,M_C} O^{(AAC)}_{kk''sqq''l}
\left[ \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t)
\mp {C^{\hat \rho^{(A)}_{sq}\hat \rho^{(C)}_{k''q''}}_{\vec J}}^{\hat \rho^{(A)}_{kl}}\!(t) \right],
\ \\
& & C^{\hat O^{(ACC)}}_{\vec J}(t) = \nonumber \\
&=& \frac{1}{2} \sum_{k,k'',s'',q,q'',l''=1}^{M_A,M_C} O^{(ACC)}_{kk''s''qq''l''} \left[ \pm \delta_{s''l''}
C^{\hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t)
\mp {C^{\hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{s''q''}}_{\vec J}}^{\hat \rho^{(C)}_{k''l''}}\!(t) \right],
\nonumber \\
& & C^{\hat O^{(BBC)}}_{\vec J}(t) = \nonumber \\
&=& \frac{1}{2} \sum_{k',k'',s',q',q'',l'=1}^{M_B,M_C} O^{(BBC)}_{k'k''s'q'q''l'}
\left[ \pm \delta_{s'l'} C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t)
\mp {C^{\hat \rho^{(B)}_{s'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}}^{\hat \rho^{(B)}_{k'l'}}\!(t) \right],
\nonumber \\
& & C^{\hat O^{(BCC)}}_{\vec J}(t) = \nonumber \\
&=& \frac{1}{2} \sum_{k',k'',s'',q',q'',l''=1}^{M_B,M_C} O^{(BCC)}_{k'k''s''q'q''l''} \left[ \pm \delta_{s''l''}
C^{\hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t)
\mp {C^{\hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{s''q''}}_{\vec J}}^{\hat \rho^{(C)}_{k''l''}}\!(t) \right].
\nonumber
\end{eqnarray}
We recall that the appearance
of the one-body (basic) density operators
on two levels means that the lower-level
multiplication has to be performed first,
and the upper-level second.
Finally,
for the inter-species three-body operator we
give the closed-form result for completeness.
For the three-body operator\break
$\hat O^{(ABC)} = \sum_{k,k',k'',q,q',q''} O^{(ABC)}_{kk'k''qq'q''}
\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}$
we find:
\begin{equation}\label{C_3mix_3B}
C^{\hat O^{(ABC)}}_{\vec J}(t) = \sum_{k,k',k'',q,q',q''=1}^{M_A,M_B,M_C} O^{(ABC)}_{kk'k''qq'q''}
C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t),
\end{equation}
which concludes our Combinadic-based \cite{mapping}
representation of the equations-of-motion
for the coefficients
in MCTDH for mixtures
of 3 kinds of identical particles interacting with up to 3-body forces,
and the calculations of all relevant matrix elements
with respect to $\left|\Psi^{(ABC)}(t)\right>$.
\section{Reduced density matrices for mixtures of three kinds
of identical particles interacting with up to three-body forces}\label{appendix_B}
\subsection*{Intra-species reduced density matrices}
The reduced one-body density matrix of the single-species multiconfigurational
wave-function $\left|\Psi^{(A)}(t)\right>$ is given by:
\begin{eqnarray}\label{DNS_A_1}
& & \rho^{(A)}(\x_1|\x'_1;t) = N_A \int d\x_2 d\x_3 \cdots d\x_{N_A} \times \\
& & \times {\Psi^{(A)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A};t)
\Psi^{(A)}(\x_1,\x_2,\ldots,\x_{N_A};t) = \nonumber \\
& & = \left<\Psi^{(A)}(t)\left|\left\{\hat{\mathbf \Psi}_A^\dag(\x'_1)\hat{\mathbf \Psi}_A(\x_1)
\right|\Psi^{(A)}(t)\right>\right\} =
\sum^M_{k,q=1} \rho^{(A)}_{kq}(t) \phi^\ast_k(\x'_1,t)\phi_q(\x_1,t), \nonumber \
\end{eqnarray}
where its matrix elements in the orbital basis
$\rho^{(A)}_{kq}(t)$
are given in Eq.~(\ref{denisty_matrx_element}) of the main text.
Then,
the reduced two-body density matrix of the single-species multiconfigurational
wave-function $\left|\Psi^{(A)}(t)\right>$ is given by:
\begin{eqnarray}\label{DNS_A_2}
& & \rho^{(A)}(\x_1,\x_2|\x'_1,\x'_2;t) = N_A(N_A-1) \int d\x_3 \cdots d\x_{N_A} \times \\
& & \times {\Psi^{(A)}}^\ast(\x'_1,\x'_2,\x_3,\ldots,\x_{N_A};t) \Psi^{(A)}(\x_1,\x_2,\x_3,\ldots,\x_{N_A};t)
= \nonumber \\
& & = \left<\Psi^{(A)}(t)\left|\left\{\hat{\mathbf \Psi}_A^\dag(\x'_1)\hat{\mathbf \Psi}_A^\dag(\x'_2)
\hat{\mathbf \Psi}_A(\x_2)\hat{\mathbf \Psi}_A(\x_1)\right|\Psi^{(A)}(t)\right> \right\} = \nonumber \\
& & = \sum^M_{k,s,l,q=1} \rho^{(A)}_{kslq}(t)
\phi^\ast_k(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi_l(\x_2,t) \phi_q(\x_1,t), \nonumber \
\end{eqnarray}
where its matrix elements in the orbital basis
$\rho^{(A)}_{kslq}(t)$
are given in Eq.~(\ref{denisty_matrx_element}).
Finally in the single-species case,
the reduced three-body density matrix of $\left|\Psi^{(A)}(t)\right>$ is given by:
\begin{eqnarray}\label{DNS_A_3}
& & \rho^{(A)}(\x_1,\x_2,\x_3|\x'_1,\x'_2,\x'_3;t) = N_A(N_A-1)(N_A-2) \int d\x_4 \cdots d\x_{N_A} \times \nonumber \\
& & \times {\Psi^{(A)}}^\ast(\x'_1,\x'_2,\x'_3,\x_4,\ldots,\x_{N_A};t) \Psi^{(A)} (\x_1,\x_2,\x_3,\x_4,\ldots,\x_{N_A};t)
= \\
& & = \left<\Psi^{(A)}(t)\left|\left\{\hat{\mathbf \Psi}_A^\dag(\x'_1)\hat{\mathbf \Psi}_A^\dag(\x'_2)
\hat{\mathbf \Psi}_A^\dag(\x'_3)
\hat{\mathbf \Psi}_A(\x_3)\hat{\mathbf \Psi}_A(\x_2)
\hat{\mathbf \Psi}_A(\x_1)\right|\Psi^{(A)}(t)\right>\right\} = \nonumber \\
& & = \sum^M_{k,s,p,r,l,q=1} \rho^{(A)}_{ksprlq}(t)
\phi^\ast_k(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi^\ast_p(\x'_3,t) \phi_r(\x_3,t) \phi_l(\x_2,t) \phi_q(\x_1,t), \nonumber \
\end{eqnarray}
where its matrix elements in the orbital basis
$\rho^{(A)}_{ksprlq}(t)$
are given in Eq.~(\ref{denisty_matrx_element}).
The reduced density matrices of the $B$ and $C$ species
are defined in an analogous manner,
where $B$ and $C$ quantities are to replace
the $A$ quantities in Eqs.~(\ref{DNS_A_1}-\ref{DNS_A_3}).
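For the bosonic case, the matrix elements $\rho^{(A)}_{kq}(t) = \left<\Psi\right|\hat a^\dag_k \hat a_q\left|\Psi\right>$ of the reduced one-body density matrix can be assembled by explicit enumeration of a small Fock basis. The sketch below is a brute-force illustration (all function names hypothetical), not the mapping-based evaluation used in the theory.

```python
import itertools
import math
import numpy as np

def fock_states(N, M):
    # all occupation vectors of N bosons in M orbitals
    return [s for s in itertools.product(range(N + 1), repeat=M) if sum(s) == N]

def apply_adag_a(k, q, state):
    # (factor, new_state) for adag_k a_q |state>, or None if it vanishes
    n = list(state)
    if n[q] == 0:
        return None
    factor = math.sqrt(n[q])
    n[q] -= 1
    factor *= math.sqrt(n[k] + 1)
    n[k] += 1
    return factor, tuple(n)

def one_body_density(C, states, M):
    # rho_{kq} = <Psi| adag_k a_q |Psi> for Psi = sum_i C_i |states_i>
    index = {s: i for i, s in enumerate(states)}
    rho = np.zeros((M, M), dtype=complex)
    for k in range(M):
        for q in range(M):
            for i, s in enumerate(states):
                res = apply_adag_a(k, q, s)
                if res is None:
                    continue
                factor, s2 = res
                rho[k, q] += np.conj(C[index[s2]]) * factor * C[i]
    return rho
```

Two standard consistency checks apply: the matrix is Hermitian, and its trace equals the particle number $N_A$ for a normalized coefficient vector.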
\subsection*{Inter-species reduced two-body density matrices}
For completeness,
we give all inter-species
reduced density matrices that
occur in a mixture
of three kinds of identical particles
interacting
with
up to
three-body forces,
where each species may have a different spin.
There are three such reduced density matrices
which are associated with
the two-body interactions
of two
distinct particles.
\begin{eqnarray}\label{DNS_AB}
& & \rho^{(AB)}(\x_1,\y_1|\x'_1,\y'_1;t) = N_A N_B \int \, d\x_2 \cdots d\x_{N_A}
d\y_2 \cdots d\y_{N_B} d\z_1 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x'_1,\ldots,\x_{N_A},\y'_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) \times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_A^\dag(\x'_1)
\hat{\mathbf \Psi}_A(\x_1)
\hat{\mathbf \Psi}_B^\dag(\y'_1)
\hat{\mathbf \Psi}_B(\y_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_A,M_B}_{k,k',q,q'=1}
\rho^{(AB)}_{kk'qq'}(t) \phi^\ast_{k}(\x'_1,t) \phi_{q}(\x_1,t)
\psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t), \
\end{eqnarray}
where its matrix elements in the orbital basis are given by:
\begin{equation}
\rho^{(AB)}_{kk'qq'}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t).
\end{equation}
\begin{eqnarray}\label{DNS_AC}
& & \rho^{(AC)}(\x_1,\z_1|\x'_1,\z'_1;t) = N_A N_C \int d\x_2 \cdots d\x_{N_A}
d\y_1 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x'_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z'_1,\ldots,\z_{N_C};t) \times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_A^\dag(\x'_1)
\hat{\mathbf \Psi}_A(\x_1)
\hat{\mathbf \Psi}_C^\dag(\z'_1)
\hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_A,M_C}_{k,k'',q,q''=1}
\rho^{(AC)}_{kk''qq''}(t) \phi^\ast_{k}(\x'_1,t) \phi_{q}(\x_1,t)
\chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \
\end{eqnarray}
where its matrix elements in the orbital basis are given by:
\begin{equation}
\rho^{(AC)}_{kk''qq''}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t).
\end{equation}
\begin{eqnarray}\label{DNS_BC}
& & \rho^{(BC)}(\y_1,\z_1|\y'_1,\z'_1;t) = N_B N_C \int d\x_1 \cdots d\x_{N_A}
d\y_2 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x_1,\ldots,\x_{N_A},\y'_1,\ldots,\y_{N_B},\z'_1,\ldots,\z_{N_C};t) \times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_B^\dag(\y'_1)
\hat{\mathbf \Psi}_B(\y_1)
\hat{\mathbf \Psi}_C^\dag(\z'_1)
\hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_B,M_C}_{k',k'',q',q''=1}
\rho^{(BC)}_{k'k''q'q''}(t) \psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t)
\chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \
\end{eqnarray}
where its matrix elements in the orbital basis are given by:
\begin{equation}
\rho^{(BC)}_{k'k''q'q''}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t).
\end{equation}
\subsection*{Inter-species reduced three-body density matrices}
There are six reduced three-body density matrices
which are associated with
the three-body interactions of two
identical particles with a third distinct one.
\begin{eqnarray}\label{DNS_AAB}
& & \rho^{(AAB)}(\x_1,\x_2,\y_1|\x'_1,\x'_2,\y'_1;t) = \\
& & = N_A(N_A-1) N_B \int \, d\x_3 \cdots d\x_{N_A}
d\y_2 \cdots d\y_{N_B} d\z_1 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x'_2,\ldots,\x_{N_A},\y'_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t)
\times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_A^\dag(\x'_1)
\hat{\mathbf \Psi}_A^\dag(\x'_2)
\hat{\mathbf \Psi}_A(\x_2)
\hat{\mathbf \Psi}_A(\x_1)
\hat{\mathbf \Psi}_B^\dag(\y'_1)
\hat{\mathbf \Psi}_B(\y_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_A,M_B}_{k,k',s,l,q,q'=1}
\rho^{(AAB)}_{kk'slqq'}(t) \phi^\ast_{k}(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi_l(\x_2,t) \phi_{q}(\x_1,t)
\psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t), \nonumber \
\end{eqnarray}
where
\begin{equation}
\rho^{(AAB)}_{kk'slqq'}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kslq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t)
\end{equation}
are its matrix elements
in the orbital basis.
\begin{eqnarray}\label{DNS_ABB}
& & \rho^{(ABB)}(\x_1,\y_1,\y_2|\x'_1,\y'_1,\y'_2;t) = \\
& & = N_A N_B (N_B-1) \int \, d\x_2 \cdots d\x_{N_A}
d\y_3 \cdots d\y_{N_B} d\z_1 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A},\y'_1,\y'_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t)
\times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_A^\dag(\x'_1)
\hat{\mathbf \Psi}_A(\x_1)
\hat{\mathbf \Psi}_B^\dag(\y'_1)
\hat{\mathbf \Psi}_B^\dag(\y'_2)
\hat{\mathbf \Psi}_B(\y_2)
\hat{\mathbf \Psi}_B(\y_1)
\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_A,M_B}_{k,k',s',l',q,q'=1}
\rho^{(ABB)}_{kk's'l'qq'}(t)
\phi^\ast_{k}(\x'_1,t)
\phi_{q}(\x_1,t)
\psi^\ast_{k'}(\y'_1,t)
\psi^\ast_{s'}(\y'_2,t)
\psi_{l'}(\y_2,t)
\psi_{q'}(\y_1,t), \nonumber \
\end{eqnarray}
where
\begin{equation}
\rho^{(ABB)}_{kk's'l'qq'}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k's'l'q'}}_{\vec J}(t)
\end{equation}
are its matrix elements in the orbital basis.
\begin{eqnarray}\label{DNS_AAC}
& & \rho^{(AAC)}(\x_1,\x_2,\z_1|\x'_1,\x'_2,\z'_1;t) = \\
& & = N_A(N_A-1) N_C \int \, d\x_3 \cdots d\x_{N_A}
d\y_1 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x'_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z'_1,\z_2,\ldots,\z_{N_C};t)
\times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_A^\dag(\x'_1)
\hat{\mathbf \Psi}_A^\dag(\x'_2)
\hat{\mathbf \Psi}_A(\x_2)
\hat{\mathbf \Psi}_A(\x_1)
\hat{\mathbf \Psi}_C^\dag(\z'_1)
\hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_A,M_C}_{k,k'',s,l,q,q''=1}
\rho^{(AAC)}_{kk''slqq''}(t) \phi^\ast_{k}(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi_l(\x_2,t) \phi_{q}(\x_1,t)
\chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \nonumber
\end{eqnarray}
where
\begin{equation}
\rho^{(AAC)}_{kk''slqq''}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kslq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t)
\end{equation}
are its matrix elements in the orbital basis.
\begin{eqnarray}\label{DNS_ACC}
& & \rho^{(ACC)}(\x_1,\z_1,\z_2|\x'_1,\z'_1,\z'_2;t) = \\
& & = N_A N_C (N_C-1) \int \, d\x_2 \cdots d\x_{N_A}
d\y_1 \cdots d\y_{N_B} d\z_3 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z'_1,\z'_2,\ldots,\z_{N_C};t)
\times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_A^\dag(\x'_1)
\hat{\mathbf \Psi}_A(\x_1)
\hat{\mathbf \Psi}_C^\dag(\z'_1)
\hat{\mathbf \Psi}_C^\dag(\z'_2)
\hat{\mathbf \Psi}_C(\z_2)
\hat{\mathbf \Psi}_C(\z_1)
\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_A,M_C}_{k,k'',s'',l'',q,q''=1}
\rho^{(ACC)}_{kk''s''l''qq''}(t)
\phi^\ast_{k}(\x'_1,t)
\phi_{q}(\x_1,t)
\chi^\ast_{k''}(\z'_1,t)
\chi^\ast_{s''}(\z'_2,t)
\chi_{l''}(\z_2,t)
\chi_{q''}(\z_1,t), \nonumber
\end{eqnarray}
where
\begin{equation}
\rho^{(ACC)}_{kk''s''l''qq''}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t)
\end{equation}
are its matrix elements in the orbital basis.
\begin{eqnarray}\label{DNS_BBC}
& & \rho^{(BBC)}(\y_1,\y_2,\z_1|\y'_1,\y'_2,\z'_1;t) = \\
& & = N_B(N_B-1) N_C \int \, d\x_1 \cdots d\x_{N_A}
d\y_3 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x_1,\x_2,\ldots,\x_{N_A},\y'_1,\y'_2,\ldots,\y_{N_B},\z'_1,\z_2,\ldots,\z_{N_C};t)
\times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_B^\dag(\y'_1)
\hat{\mathbf \Psi}_B^\dag(\y'_2)
\hat{\mathbf \Psi}_B(\y_2)
\hat{\mathbf \Psi}_B(\y_1)
\hat{\mathbf \Psi}_C^\dag(\z'_1)
\hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_B,M_C}_{k',k'',s',l',q',q''=1}
\rho^{(BBC)}_{k'k''s'l'q'q''}(t) \psi^\ast_{k'}(\y'_1,t) \psi^\ast_{s'}(\y'_2,t) \psi_{l'}(\y_2,t) \psi_{q'}(\y_1,t)
\chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \nonumber
\end{eqnarray}
where
\begin{equation}
\rho^{(BBC)}_{k'k''s'l'q'q''}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(B)}_{k's'l'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t)
\end{equation}
are its matrix elements in the orbital basis.
\begin{eqnarray}\label{DNS_BCC}
& & \rho^{(BCC)}(\y_1,\z_1,\z_2|\y'_1,\z'_1,\z'_2;t) = \\
& & = N_B N_C (N_C-1) \int \, d\x_1 \cdots d\x_{N_A}
d\y_2 \cdots d\y_{N_B} d\z_3 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x_1,\x_2,\ldots,\x_{N_A},\y'_1,\y_2,\ldots,\y_{N_B},\z'_1,\z'_2,\ldots,\z_{N_C};t)
\times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_B^\dag(\y'_1)
\hat{\mathbf \Psi}_B(\y_1)
\hat{\mathbf \Psi}_C^\dag(\z'_1)
\hat{\mathbf \Psi}_C^\dag(\z'_2)
\hat{\mathbf \Psi}_C(\z_2)
\hat{\mathbf \Psi}_C(\z_1)
\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_B,M_C}_{k',k'',s'',l'',q',q''=1}
\rho^{(BCC)}_{k'k''s''l''q'q''}(t)
\psi^\ast_{k'}(\y'_1,t)
\psi_{q'}(\y_1,t)
\chi^\ast_{k''}(\z'_1,t)
\chi^\ast_{s''}(\z'_2,t)
\chi_{l''}(\z_2,t)
\chi_{q''}(\z_1,t), \nonumber
\end{eqnarray}
where
\begin{equation}
\rho^{(BCC)}_{k'k''s''l''q'q''}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t)
\end{equation}
are its matrix elements in the orbital basis.
Finally, there is a single reduced three-body density matrix,
associated with the three-body interaction
of three particles of distinct species.
\begin{eqnarray}\label{DNS_ABC}
& & \rho^{(ABC)}(\x_1,\y_1,\z_1|\x'_1,\y'_1,\z'_1;t) = \\
& & = N_A N_B N_C \int \, d\x_2 \cdots d\x_{N_A}
d\y_2 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\
& & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A},\y'_1,\y_2,\ldots,\y_{N_B},\z'_1,\z_2,\ldots,\z_{N_C};t)
\times \nonumber \\
& & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\
& & = \left<\Psi^{(ABC)}(t)\left| \left\{
\hat{\mathbf \Psi}_A^\dag(\x'_1)
\hat{\mathbf \Psi}_A(\x_1)
\hat{\mathbf \Psi}_B^\dag(\y'_1)
\hat{\mathbf \Psi}_B(\y_1)
\hat{\mathbf \Psi}_C^\dag(\z'_1)
\hat{\mathbf \Psi}_C(\z_1)
\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\
& & = \sum^{M_A,M_B,M_C}_{k,k',k'',q,q',q''=1}
\rho^{(ABC)}_{kk'k''qq'q''}(t)
\phi^\ast_{k}(\x'_1,t)
\phi_{q}(\x_1,t)
\psi^\ast_{k'}(\y'_1,t)
\psi_{q'}(\y_1,t)
\chi^\ast_{k''}(\z'_1,t)
\chi_{q''}(\z_1,t), \nonumber
\end{eqnarray}
where
\begin{equation}
\rho^{(ABC)}_{kk'k''qq'q''}(t)=
\sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t)
\end{equation}
are its matrix elements in the orbital basis.
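The six-fold orbital expansion in the last line of Eq.~(\ref{DNS_ABC}) is a plain tensor contraction. The sketch below, with small random stand-ins for the orbitals and for the matrix elements $\rho^{(ABC)}_{kk'k''qq'q''}$ (all sizes and names are illustrative assumptions), checks the `einsum` form against an explicit sum.

```python
import numpy as np

# Assumption: random stand-ins; only the contraction pattern of the
# orbital expansion of rho^(ABC) is being illustrated.
rng = np.random.default_rng(1)
M, n = 2, 3                               # orbitals per species, grid points
cplx = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)

phi, psi, chi = cplx(M, n), cplx(M, n), cplx(M, n)   # phi[k, x] etc.
rho = cplx(M, M, M, M, M, M)              # rho[k, k', k'', q, q', q'']

# rho^(ABC)(x,y,z | x',y',z') as a six-fold sum over orbital indices
R = np.einsum('abcdef,aX,dx,bY,ey,cZ,fz->xyzXYZ',
              rho, phi.conj(), phi, psi.conj(), psi, chi.conj(), chi)

# Spot check one entry against an explicit sum
pt = sum(rho[a, b, c, d, e, f] * phi[a, 1].conj() * phi[d, 0]
         * psi[b, 2].conj() * psi[e, 0] * chi[c, 0].conj() * chi[f, 0]
         for a in range(M) for b in range(M) for c in range(M)
         for d in range(M) for e in range(M) for f in range(M))
print(np.isclose(R[0, 0, 0, 1, 2, 0], pt))   # prints True
```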
\section{Further details of the derivation of the equations-of-motion
for mixtures of three kinds of identical particles}\label{appendix_C}
The derivation of the equations-of-motion
for the orbitals (\ref{EOM_final_orbitals_3mix})
starts by expressing the expectation value
of $\hat H^{(ABC)}$ with respect to the
many-particle wave-function $\left|\Psi^{(ABC)}\right>$
in a form which depends explicitly
on the various integrals over the orbitals.
Thus we have:
\begin{eqnarray}\label{expectation_ALL_orbitals}
& &\left<\Psi^{(ABC)}\left|\hat H^{(ABC)}
- i\frac{\partial}{\partial t}\right|\Psi^{(ABC)}\right> =
\sum_{k,q=1}^{M_A} \rho^{(A)}_{kq} \left[ h^{(A)}_{kq} -
\left\{i\frac{\partial}{\partial t}^{(A)}\right\}_{kq} \right] + \nonumber \\
&& + \frac{1}{2}\sum_{k,s,l,q=1}^{M_A} \rho^{(A)}_{kslq} W^{(A)}_{ksql}
+ \frac{1}{6}\sum_{k,s,p,r,l,q=1}^{M_A} \rho^{(A)}_{ksprlq} U^{(A)}_{kspqlr} + \nonumber \\
&& + \sum_{k',q'=1}^{M_B} \rho^{(B)}_{k'q'} \left[ h^{(B)}_{k'q'} -
\left\{i\frac{\partial}{\partial t}^{(B)}\right\}_{k'q'} \right] + \nonumber \\
&& + \frac{1}{2}\sum_{k',s',l',q'=1}^{M_B} \rho^{(B)}_{k's'l'q'} W^{(B)}_{k's'q'l'}
+ \frac{1}{6}\sum_{k',s',p',r',l',q'=1}^{M_B} \rho^{(B)}_{k's'p'r'l'q'} U^{(B)}_{k's'p'q'l'r'} + \nonumber \\
&& + \sum_{k'',q''=1}^{M_C} \rho^{(C)}_{k''q''} \left[ h^{(C)}_{k''q''} -
\left\{i\frac{\partial}{\partial t}^{(C)}\right\}_{k''q''} \right] + \nonumber \\
&& + \frac{1}{2}\sum_{k'',s'',l'',q''=1}^{M_C} \rho^{(C)}_{k''s''l''q''} W^{(C)}_{k''s''q''l''} + \nonumber \\
& & + \frac{1}{6}\sum_{k'',s'',p'',r'',l'',q''=1}^{M_C} \rho^{(C)}_{k''s''p''r''l''q''} U^{(C)}_{k''s''p''q''l''r''}
+ \\
& & + \sum_{k,k',q,q'=1}^{M_A,M_B} \rho^{(AB)}_{kk'qq'} W^{(AB)}_{kk'qq'}
+ \sum_{k,k'',q,q''=1}^{M_A,M_C} \rho^{(AC)}_{kk''qq''} W^{(AC)}_{kk''qq''} + \nonumber \\
& & + \sum_{k',k'',q',q''=1}^{M_B,M_C} \rho^{(BC)}_{k'k''q'q''} W^{(BC)}_{k'k''q'q''} + \nonumber \\
& & + \frac{1}{2} \sum_{k,k',s,q,q',l=1}^{M_A,M_B} \rho^{(AAB)}_{kk'slqq'} U^{(AAB)}_{kk'sqq'l}
+ \frac{1}{2} \sum_{k,k',s',q,q',l'=1}^{M_A,M_B} \rho^{(ABB)}_{kk's'l'qq'} U^{(ABB)}_{kk's'qq'l'} + \nonumber \\
& & + \frac{1}{2} \sum_{k,k'',s,q,q'',l=1}^{M_A,M_C} \rho^{(AAC)}_{kk''slqq''} U^{(AAC)}_{kk''sqq''l}
+ \frac{1}{2} \sum_{k,k'',s'',q,q'',l''=1}^{M_A,M_C} \rho^{(ACC)}_{kk''s''l''qq''} U^{(ACC)}_{kk''s''qq''l''}
+ \nonumber \\
& & + \frac{1}{2} \sum_{k',k'',s',q',q'',l'=1}^{M_B,M_C} \rho^{(BBC)}_{k'k''s'l'q'q''} U^{(BBC)}_{k'k''s'q'q''l'}
+ \frac{1}{2} \sum_{k',k'',s'',q',q'',l''=1}^{M_B,M_C} \rho^{(BCC)}_{k'k''s''l''q'q''} U^{(BCC)}_{k'k''s''q'q''l''} +
\nonumber \\
& & + \sum_{k,k',k'',q,q',q''=1}^{M_A,M_B,M_C} \rho^{(ABC)}_{kk'k''qq'q''} U^{(ABC)}_{kk'k''qq'q''}
- \sum_{\{\vec J\}}
i C^\ast_{\vec J}(t) \dot C_{\vec J}(t). \nonumber
\end{eqnarray}
The expectation values of the
various density operators appearing in (\ref{expectation_ALL_orbitals})
have been prescribed in Appendix \ref{appendix_B}.
The matrix elements in (\ref{expectation_ALL_orbitals}) of the single-species terms of species $A$,
and correspondingly of species $B$ and $C$,
with respect to the orbitals have been discussed in section \ref{SEC2.1},
see Eq.~(\ref{matrix_elements}).
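In the orbital basis, the single-species pieces of the expectation value are plain tensor contractions. The following hedged NumPy sketch with random stand-in tensors (all names illustrative) evaluates the one- and two-body contributions for one species; the point being illustrated is the index order of $W^{(A)}_{ksql}$ relative to $\rho^{(A)}_{kslq}$.

```python
import numpy as np

# Assumption: random stand-ins for the density matrices and integrals;
# only the contraction pattern of the single-species sums is shown.
rng = np.random.default_rng(2)
M = 3
rho1 = rng.normal(size=(M, M))            # rho^(A)_{kq}
h1   = rng.normal(size=(M, M))            # h^(A)_{kq} (time-derivative term folded in)
rho2 = rng.normal(size=(M, M, M, M))      # rho^(A)_{kslq}
W2   = rng.normal(size=(M, M, M, M))      # W^(A)_{ksql}

# sum_{kq} rho_{kq} h_{kq}  +  (1/2) sum_{kslq} rho_{kslq} W_{ksql}
E_A = np.einsum('kq,kq->', rho1, h1) + 0.5 * np.einsum('kslq,ksql->', rho2, W2)

# Cross-check against explicit loops (note the q <-> l swap inside W)
ref = sum(rho1[k, q] * h1[k, q] for k in range(M) for q in range(M)) \
    + 0.5 * sum(rho2[k, s, l, q] * W2[k, s, q, l]
                for k in range(M) for s in range(M)
                for l in range(M) for q in range(M))
print(np.isclose(E_A, ref))   # prints True
```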
The matrix elements arising from two-body inter-species interactions
are listed for completeness below:
\begin{eqnarray}\label{MIX_matrix_elements_2B}
& & W^{(AB)}_{kk'qq'} = \int \!\! \int \phi_k^\ast(\x,t) \psi_{k'}^\ast(\y,t) \hat W^{(AB)}(\x,\y)
\phi_q(\x,t) \psi_{q'}(\y,t) d{\bf x}d\y, \nonumber \\
& & W^{(AC)}_{kk''qq''} = \int \!\! \int \phi_k^\ast(\x,t) \chi_{k''}^\ast(\z,t) \hat W^{(AC)}(\x,\z)
\phi_q(\x,t) \chi_{q''}(\z,t) d{\bf x}d\z, \nonumber \\
& & W^{(BC)}_{k'k''q'q''} = \int \!\! \int \psi_{k'}^\ast(\y,t) \chi_{k''}^\ast(\z,t) \hat W^{(BC)}(\y,\z)
\psi_{q'}(\y,t) \chi_{q''}(\z,t) d{\bf y}d\z,
\end{eqnarray}
and the matrix elements arising from
three-body inter-species interactions read
as follows:
\begin{eqnarray}\label{MIX_matrix_elements_3B}
& & U^{(AAB)}_{kk'sqq'l} = \nonumber \\
& & = \int \!\! \int \!\! \int
\phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \psi_{k'}^\ast(\y,t) \hat U^{(AAB)}(\x,\x',\y)
\phi_q(\x,t) \phi_l(\x',t) \psi_{q'}(\y,t) d{\bf x}d\x' d\y, \nonumber \\
& & U^{(ABB)}_{kk's'qq'l'} = \nonumber \\
& & = \int \!\! \int \!\! \int
\phi_k^\ast(\x,t) \psi_{k'}^\ast(\y,t) \psi_{s'}^\ast(\y',t) \hat U^{(ABB)}(\x,\y,\y')
\phi_q(\x,t) \psi_{q'}(\y,t) \psi_{l'}(\y',t) d{\bf x}d{\bf y}d\y', \nonumber \\
& & U^{(AAC)}_{kk''sqq''l} = \nonumber \\
& & = \int \!\! \int \!\! \int
\phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \chi_{k''}^\ast(\z,t) \hat U^{(AAC)}(\x,\x',\z)
\phi_q(\x,t) \phi_l(\x',t) \chi_{q''}(\z,t) d{\bf x}d\x' d\z, \nonumber \\
& & U^{(ACC)}_{kk''s''qq''l''} = \nonumber \\
& & = \int \!\! \int \!\! \int
\phi_k^\ast(\x,t) \chi_{k''}^\ast(\z,t) \chi_{s''}^\ast(\z',t) \hat U^{(ACC)}(\x,\z,\z')
\phi_q(\x,t) \chi_{q''}(\z,t) \chi_{l''}(\z',t) d{\bf x}d{\bf z}d\z', \nonumber \\
& & U^{(BBC)}_{k'k''s'q'q''l'} = \nonumber \\
& & = \int \!\! \int \!\! \int
\psi_{k'}^\ast(\y,t) \psi_{s'}^\ast(\y',t) \chi_{k''}^\ast(\z,t) \hat U^{(BBC)}(\y,\y',\z)
\psi_{q'}(\y,t) \psi_{l'}(\y',t) \chi_{q''}(\z,t) d{\bf y}d\y' d\z, \nonumber \\
& & U^{(BCC)}_{k'k''s''q'q''l''} = \nonumber \\
& & = \int \!\! \int \!\! \int
\psi_{k'}^\ast(\y,t) \chi_{k''}^\ast(\z,t) \chi_{s''}^\ast(\z',t) \hat U^{(BCC)}(\y,\z,\z')
\psi_{q'}(\y,t) \chi_{q''}(\z,t) \chi_{l''}(\z',t) d{\bf y}d{\bf z}d\z', \nonumber \\
& & U^{(ABC)}_{kk'k''qq'q''} = \nonumber \\
& & = \int \!\! \int \!\! \int
\phi_{k}^\ast(\x,t) \psi_{k'}^\ast(\y,t) \chi_{k''}^\ast(\z,t) \hat U^{(ABC)}(\x,\y,\z)
\phi_{q}(\x,t) \psi_{q'}(\y,t) \chi_{q''}(\z,t) d{\bf x}d{\bf y}d\z.
\end{eqnarray}
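On a grid, the two-body integrals (\ref{MIX_matrix_elements_2B}) and the three-body integrals (\ref{MIX_matrix_elements_3B}) all reduce to weighted tensor contractions. The sketch below uses a 1D grid, random orbitals and model Gaussian kernels (all of these are illustrative assumptions, not the method's actual discretization) to evaluate one two-body and one three-body case.

```python
import numpy as np

# Assumptions: shared 1D grid for all species, random orbitals, and
# Gaussian model kernels standing in for W^(AB) and U^(ABC).
rng = np.random.default_rng(3)
M, n, dx = 2, 12, 0.5
x = (np.arange(n) - n / 2) * dx
cplx = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)
phi, psi, chi = cplx(M, n), cplx(M, n), cplx(M, n)   # orbitals phi[k, x] etc.

W2 = np.exp(-(x[:, None] - x[None, :])**2)                 # W^(AB)(x, y)
U3 = np.exp(-(x[:, None, None] - x[None, :, None])**2
            - (x[None, :, None] - x[None, None, :])**2)    # U^(ABC)(x, y, z)

# W^(AB)_{kk'qq'} = int phi*_k(x) psi*_{k'}(y) W(x,y) phi_q(x) psi_{q'}(y) dx dy
W_AB = np.einsum('ax,by,xy,cx,dy->abcd',
                 phi.conj(), psi.conj(), W2, phi, psi) * dx**2

# U^(ABC)_{kk'k''qq'q''}: same pattern with one orbital pair per species
U_ABC = np.einsum('ax,by,cz,xyz,dx,ey,fz->abcdef',
                  phi.conj(), psi.conj(), chi.conj(), U3, phi, psi, chi) * dx**3

# For a real kernel the bra/ket index pairs are Hermitian-conjugate
print(np.allclose(W_AB, W_AB.transpose(2, 3, 0, 1).conj()))   # prints True
```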
Performing the variation of the integrals (\ref{MIX_matrix_elements_2B})
with respect to the orbitals $\left\{\phi_k(\x,t)\right\}$,
$\left\{\psi_{k'}(\y,t)\right\}$
and
$\left\{\chi_{k''}(\z,t)\right\}$,
we find six types of inter-species
one-body potentials
emerging
from two-body interactions:
\begin{eqnarray}\label{all_local_2B_potentials}
& & \hat W^{(AB)}_{k'q'}(\x,t) = \int \psi_{k'}^\ast(\y,t)
\hat W^{(AB)}(\x,\y) \psi_{q'}(\y,t) d\y, \nonumber \\
& & \hat W^{(BA)}_{kq}(\y,t) = \int \phi_{k}^\ast(\x,t)
\hat W^{(AB)}(\x,\y) \phi_{q}(\x,t) d\x, \nonumber \\
& & \hat W^{(AC)}_{k''q''}(\x,t) = \int \chi_{k''}^\ast(\z,t)
\hat W^{(AC)}(\x,\z) \chi_{q''}(\z,t) d\z, \nonumber \\
& & \hat W^{(CA)}_{kq}(\z,t) = \int \phi_{k}^\ast(\x,t)
\hat W^{(AC)}(\x,\z) \phi_{q}(\x,t) d\x, \nonumber \\
& & \hat W^{(BC)}_{k''q''}(\y,t) = \int \chi_{k''}^\ast(\z,t)
\hat W^{(BC)}(\y,\z) \chi_{q''}(\z,t) d\z, \nonumber \\
& & \hat W^{(CB)}_{k'q'}(\z,t) = \int \psi_{k'}^\ast(\y,t)
\hat W^{(BC)}(\y,\z) \psi_{q'}(\y,t) d\y.
\end{eqnarray}
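Each of the six local potentials in (\ref{all_local_2B_potentials}) is a single integral over the partner species' coordinate, so numerically it is one contraction per grid point. A minimal quadrature sketch for a potential of the $\hat W^{(AB)}_{k'q'}(\x,t)$ type, with random orbitals and a model Gaussian kernel (illustrative assumptions):

```python
import numpy as np

# Assumptions: 1D grid, random B-species orbitals, Gaussian model kernel.
rng = np.random.default_rng(4)
M, n, dx = 2, 24, 0.25
x = (np.arange(n) - n / 2) * dx
psi = rng.normal(size=(M, n)) + 1j * rng.normal(size=(M, n))
W2 = np.exp(-(x[:, None] - x[None, :])**2)       # model W^(AB)(x, y)

# W^(AB)_{k'q'}(x) = sum_y psi*_{k'}(y) W(x, y) psi_{q'}(y) dy
W_loc = np.einsum('ky,xy,qy->kqx', psi.conj(), W2, psi) * dx

print(W_loc.shape)       # prints (2, 2, 24)
```

The remaining five potentials follow by permuting which species supplies the orbital pair and which supplies the free coordinate.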
Making the variation of the integrals (\ref{MIX_matrix_elements_3B})
with respect to the orbitals,
we arrive at fifteen types
of inter-species one-body potentials
resulting from three-body interactions: