Recall that the output of cycle benchmarking is a product of Pauli fidelities (including SPAM noise). We further show that without loss of generality this is the only type of information that we need to obtain from quantum experiments for the purpose of noise learning. This is because in general the output probability of any quantum experiment can be expressed as a sum of products of Pauli fidelities, and each individual product can be learned by cycle benchmarking (Supplementary Section IV). We therefore consider learning functions of the noise model that can be expressed as a product of Pauli fidelities (also see below Eq.~\eqref{eq:mainapproximation} for a related discussion). This can be reduced to considering functions of the form $f=\sum_{a,\mathcal G}v_{a}^{\mathcal G}\cdot l_a^{\mathcal G}$, where $l_a^{\mathcal G}:=\log \lambda_a^{\mathcal G}$ is the log Pauli fidelity, $v_{a}^{\mathcal G}\in\mathbb{R}$, and the superscript $\mathcal G$ denotes the corresponding Clifford gate. In the CNOT example $l_{IZ}+l_{ZZ}$ is a learnable function. The idea of learning log Pauli fidelities in benchmarking has also been considered in~\cite{flammia2021averaged,nielsen2022first}. The advantage of considering log Pauli fidelities here is that the set of all learnable functions $f$ forms a vector space. Therefore to characterize all independent learnable degrees of freedom, we only need to determine a basis of the vector space.
\begin{figure}
\caption{Pattern transfer graph of CNOT, SWAP, and a gate set consisting of CNOT and SWAP. Here, multiple edges are represented by a single edge with multiple labels.
The labels on the first two graphs are gate dependent, though we omit the superscripts of CNOT or SWAP.
The labels on the last graph are a combination of the first two graphs and are omitted for clarity.}
\label{fig:main_patterntransfer}
\end{figure}
Recall that the reason that $l_{IZ}+l_{ZZ}$ is learnable in the CNOT example is that the path of the Pauli operator in the cycle benchmarking circuit forms a cycle $IZ\to ZZ\to IZ\to\cdots$, and the product of Pauli fidelities along the cycle ($\lambda_{IZ}\lambda_{ZZ}$) can be learned via curve fitting. In general, as we can also insert single-qubit Clifford gates in between, we do not need to differentiate between $X,Y,Z$. We therefore consider the \emph{pattern transfer graph} associated with a Clifford gate set, where vertices correspond to binary Pauli weight patterns and each edge is labeled by the Pauli fidelity of the incoming Pauli operator. The graph has $2^n$ vertices and $m\cdot 4^n$ directed edges, where $m$ is the number of gates in the gate set. Fig.~\ref{fig:main_patterntransfer} shows the pattern transfer graphs of CNOT, SWAP, and the gate set $\{\text{CNOT}, \text{SWAP}\}$; the graphs of the individual gates can be merged to form that of the gate set. Consider an arbitrary cycle in the pattern transfer graph $C=(e_1,\dots,e_k)$ where each edge $e_i$ is associated with some Pauli fidelity $\lambda_i$. Following Fig.~\ref{fig:main_cb} (b), a cycle benchmarking circuit can be constructed which learns the product of the Pauli fidelities along the cycle, or equivalently the function $f_C:=\sum_{e_i\in C}\log \lambda_i$ can be learned. This implies that the set of functions defined by linear combinations of cycles, $\{\sum_{C\in\text{cycles}}\alpha_C f_C:\alpha_C\in\mathbb{R}\}$, is learnable. In the following we show that this in fact corresponds to all learnable information about Pauli noise.
We label the edges of the pattern transfer graph as $e_1,\dots,e_M$ where $M=m\cdot 4^n$ and each edge $e_i$ is a variable that represents some log Pauli fidelity. The goal is to characterize the learnability of linear functions of the edge variables $f=\sum_{i=1}^M v_i e_i$, $v_i\in\mathbb{R}$. The set of linear functions can be equivalently understood as a vector space of dimension $M$, called the \emph{edge space} of the graph, where $f$ corresponds to a vector $(v_1,\dots,v_M)$ and we think of $e_1,\dots,e_M$ as the standard basis. Following the above discussion, the \emph{cycle space} of the graph is defined as $\mathrm{span}\{\sum_{e\in C}e:C\text{ is a cycle}\}$, which is a subspace of the edge space. We also define another subspace, the \emph{cut space}, as $\mathrm{span}\{\sum_{e\in C}(-1)^{e\text{ from }V_1\text{ to }V_2}e:C\text{ is the cut associated with a partition of the vertices into }V_1,V_2\}$. It is known that the edge space is the orthogonal direct sum of the cycle space and the cut space for any graph~\cite{bollobas1998modern}. Interestingly, we show that the complementarity between the cycle and cut spaces happens to be the dividing line that determines the learnability of Pauli noise.
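As a concrete illustration of the pattern transfer graph and of the quantity $c$ appearing below (our own example, not part of any protocol), the following Python sketch builds the graph of a single CNOT gate from its symplectic action on Pauli operators and counts its weakly connected components; the function names and the use of the \texttt{networkx} package are arbitrary choices made for this illustration.
\begin{verbatim}
# Minimal sketch: pattern transfer graph of CNOT and its connected components.
import itertools
import networkx as nx

# CNOT (control = qubit 1, target = qubit 2) acting on a Pauli written in
# symplectic form (x1, z1, x2, z2): X1 -> X1X2, Z2 -> Z1Z2, Z1 and X2 fixed.
def cnot(p):
    x1, z1, x2, z2 = p
    return (x1, z1 ^ z2, x2 ^ x1, z2)

def pattern(p):
    return (int(p[0] or p[1]), int(p[2] or p[3]))

G = nx.MultiDiGraph()
G.add_nodes_from(itertools.product((0, 1), repeat=2))
for p in itertools.product((0, 1), repeat=4):          # all 16 two-qubit Paulis
    G.add_edge(pattern(p), pattern(cnot(p)), pauli=p)  # one edge per Pauli fidelity

print(G.number_of_nodes(), G.number_of_edges())        # 4 vertices, 16 edges
print(nx.number_weakly_connected_components(G))        # c = 2 components
\end{verbatim}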
\begin{theorem}\label{thm:mainpaulilearnability}
The vector space of learnable functions of the Pauli noise channels associated with an $n$-qubit Clifford gate set is equivalent to the cycle space of the pattern transfer graph. In other words,
\begin{equation}
\begin{aligned}
\text{All information}\quad &\equiv \quad \text{Edge space},\\
\text{Learnable information}\quad &\equiv \quad \text{Cycle space},\\
\text{Unlearnable information}\quad &\equiv \quad \text{Cut space}.\\
\end{aligned}
\end{equation}
This implies that the number of unlearnable degrees of freedom equals $2^n - c$, where $c$ is the number of connected components of the pattern transfer graph.
\end{theorem}
The learnability of cycle space follows from cycle benchmarking as discussed above. To prove the unlearnability of cut space, we use a similar argument as in Theorem~\ref{thm:mainpaulifidelity} and show that a gauge transformation can be constructed for each cut in the pattern transfer graph. By linearity, this implies that any vector in the cut space corresponds to a gauge transformation. By definition, a learnable function must be orthogonal to all such vectors and thus orthogonal to the entire cut space. More details of the proof are given in Supplementary Section II C.
It is a well-known fact in graph theory that the cycle space of a directed graph $G=(V,E)$ has dimension $|E|-|V|+c$ while the cut space has dimension $|V|-c$, where $c\geq 1$ is the number of connected components in $G$~\cite{bollobas1998modern}
(a (weakly) connected component is a maximal subgraph in which every vertex is reachable from every other vertex via an undirected path).
Theorem~\ref{thm:mainpaulilearnability} implies that among the $m\cdot 4^n$ degrees of freedom of the Pauli noise associated with a Clifford gate set, there are $2^n -c$ unlearnable degrees of freedom. This shows that while the number of unlearnable degrees of freedom can be exponentially large, they only occupy an exponentially small fraction of the entire space. In addition, a cycle and cut basis can be efficiently determined for a given graph, though in our case this takes exponential time because the pattern transfer graph itself is exponentially large. However, computing the cycle/cut basis is not the bottleneck as the information to be learned also grows exponentially with the number of qubits.
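As a concrete check of these counting formulas, the pattern transfer graph of a single CNOT gate (Fig.~\ref{fig:main_patterntransfer}) has $|V|=2^2=4$ vertices, $|E|=4^2=16$ edges, and $c=2$ connected components (the all-identity pattern $00$ forms its own component), so the cycle space has dimension $16-4+2=14$ and the cut space has dimension $4-2=2$, consistent with the $2^n-c=2$ unlearnable degrees of freedom stated above.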
For small system sizes such as 2-qubit Clifford gates, we can write down a cycle basis as shown in Table~\ref{tab:main:CNOT_full} (a) for the CNOT and SWAP gates, which represents all learnable information about these gates. The CNOT gate has 2 unlearnable degrees of freedom while the SWAP gate has 1 unlearnable degree of freedom. As the pattern transfer graph has at least 2 connected components, we conclude that the Pauli noise channel of a 2-qubit Clifford gate has at most 2 unlearnable degrees of freedom. Note that when treating $\{\mathrm{CNOT},\mathrm{SWAP}\}$ together as a gate set, there are only 2 unlearnable degrees of freedom according to Theorem~\ref{thm:mainpaulilearnability} instead of $2+1=3$, because there is one additional learnable degree of freedom (such as $l_{IZ}^{\mathrm{CNOT}}+l_{XX}^{\mathrm{CNOT}}+l_{XI}^{\mathrm{SWAP}}$) that is a joint function of the two gates.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|}
\hline
Gate & CNOT & SWAP \\
\hline
\makecell{(a) Cycle basis}
&\makecell{ $l_{II},l_{ZI},l_{IX},l_{ZX},
l_{XZ},l_{YY},l_{XY},l_{YZ},$ \\ $l_{IZ}+l_{ZZ},l_{IY}+l_{ZY},l_{IZ}+l_{ZY},$\\$l_{XI}+l_{XX},l_{YI}+l_{YX},l_{XI}+l_{YX}$ } & \makecell{$l_{II},l_{XX},l_{XY},l_{XZ},l_{YX},l_{YY},l_{YZ},l_{ZX},l_{ZY},$\\$l_{ZZ},l_{IX}+l_{XI},l_{IY}+l_{YI},l_{IZ}+l_{ZI},$\\$l_{XI}+l_{IY},l_{XI}+l_{IZ}$}
\\ \hline
\makecell{(b) Learnable\\ Pauli fidelities}
&\makecell{ $\lambda_{II},\lambda_{ZI},\lambda_{IX},\lambda_{ZX},
\lambda_{XZ},\lambda_{YY},\lambda_{XY},\lambda_{YZ},$
\\$\lambda_{IZ}\cdot\lambda_{ZZ},\lambda_{IY}\cdot\lambda_{ZY},\lambda_{IZ}\cdot\lambda_{ZY},$\\$\lambda_{XI}\cdot\lambda_{XX},\lambda_{YI}\cdot\lambda_{YX},\lambda_{XI}\cdot\lambda_{YX}$} &\makecell{$\lambda_{II},\lambda_{XX},\lambda_{XY},\lambda_{XZ},\lambda_{YX},\lambda_{YY},\lambda_{YZ},\lambda_{ZX},\lambda_{ZY},$\\$\lambda_{ZZ},\lambda_{IX}\cdot\lambda_{XI},\lambda_{IY}\cdot\lambda_{YI},\lambda_{IZ}\cdot\lambda_{ZI},$\\$\lambda_{XI}\cdot\lambda_{IY},\lambda_{XI}\cdot\lambda_{IZ}$}
\\ \hline
\makecell{(c) Learnable\\ Pauli errors}
&\makecell{ $p_{II},p_{ZI},p_{IX},p_{ZX},
p_{XZ},p_{YY},p_{XY},p_{YZ},$
\\$p_{IZ}+p_{ZZ},p_{IY}+p_{ZY},p_{IZ}+p_{ZY},$\\$p_{XI}+p_{XX},p_{YI}+p_{YX},p_{XI}+p_{YX}$} &\makecell{$p_{II},p_{XX},p_{XY},p_{XZ},p_{YX},p_{YY},p_{YZ},p_{ZX},p_{ZY},$\\$p_{ZZ},p_{IX}+p_{XI},p_{IY}+p_{YI},p_{IZ}+p_{ZI},$\\$p_{XI}+p_{IY},p_{XI}+p_{IZ}$} \\
\hline
\makecell{(d) Unlearnable\\ degrees of freedom}
& $\lambda_{XI},\lambda_{IZ}$ & $\lambda_{XI}$ \\
\hline
\end{tabular}
\caption{A complete basis for the learnable linear functions of log Pauli fidelities and Pauli error rates for a single CNOT/SWAP gate.
}
\label{tab:main:CNOT_full}
\end{table}
Finally, the learnability of Pauli errors can be determined by the learnability of Pauli fidelities according to the Walsh-Hadamard transform $p_a = \frac{1}{4^n}\sum_{b\in{{\sf P}^n}}\lambda_b(-1)^\expval{a,b}$. An issue here is that Pauli errors are linear functions of $\{\lambda_b\}$ instead of $\{\log \lambda_b\}$. Here we make a standard assumption in the literature~\cite{erhard2019characterizing,flammia2020efficient} that the total Pauli error is sufficiently small. In this case all individual Pauli errors are close to 0 while all individual Pauli fidelities are close to 1. Therefore the Pauli errors can be estimated via
\begin{equation}\label{eq:mainapproximation}
p_a = \frac{1}{4^n}\sum_{b\in{{\sf P}^n}}\lambda_b(-1)^\expval{a,b}\approx\frac{1}{4^n}\sum_{b\in{{\sf P}^n}}(-1)^\expval{a,b}\left(1+\log\lambda_b\right),
\end{equation}
which means that their learnability can be determined by Theorem~\ref{thm:mainpaulilearnability}. In fact it has been suggested~\cite{nielsen2022first} that any function of Pauli fidelities can be estimated in this way (as a linear function of log Pauli fidelities) up to a first-order approximation, which means that the learnability of any function of Pauli fidelities can be determined by Theorem~\ref{thm:mainpaulilearnability}. In Table~\ref{tab:main:CNOT_full} (c) we show the learnable Pauli errors for CNOT and SWAP, where ``learnable'' is in an approximate sense up to Eq.~\eqref{eq:mainapproximation}.
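As a small numerical illustration of Eq.~\eqref{eq:mainapproximation} (our own example, with arbitrarily chosen error rates), the following Python sketch compares the exact Walsh--Hadamard transform with its first-order approximation for a weak single-qubit Pauli channel.
\begin{verbatim}
# Minimal sketch: exact vs. first-order Pauli error rates for a weak 1-qubit channel.
import numpy as np

# Single-qubit Paulis in symplectic form a = (x, z); <a,b> = 1 iff P_a, P_b anticommute.
paulis = [(0, 0), (1, 0), (1, 1), (0, 1)]                 # I, X, Y, Z
sym = lambda a, b: (a[0] * b[1] + a[1] * b[0]) % 2

p_true = np.array([0.97, 0.02, 0.005, 0.005])             # example error rates
lam = np.array([sum(p_true[i] * (-1) ** sym(paulis[i], b) for i in range(4))
                for b in paulis])                         # Pauli fidelities

signs = np.array([[(-1) ** sym(a, b) for b in paulis] for a in paulis])
p_exact = signs @ lam / 4                                 # exact inverse transform
p_approx = signs @ (1 + np.log(lam)) / 4                  # first-order approximation

print(np.round(p_exact, 4))     # recovers p_true exactly
print(np.round(p_approx, 4))    # close to p_true since the total error is small
\end{verbatim}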
Interestingly, for these two gates, the learnable functions of Pauli errors have the same form as the cycle basis, \textit{i.e.} the cycle space is invariant under Walsh-Hadamard transform.
We calculate the learnable Pauli errors for random Clifford gates on up to 4 qubits, and this invariance appears to hold in general.
We leave a rigorous investigation into this phenomenon for future work.
\subsection{Experiments on IBM Quantum hardware}
We demonstrate our theory on IBM quantum hardware~\cite{ibmquantum} using a minimal example -- characterizing the noise channel of a CNOT gate. In our experiments both the gate noise and SPAM noise are twirled into Pauli noise using randomized compiling. In the following we show how to extract all learnable information of Pauli noise SPAM-robustly, and also attempt to estimate the unlearnable degrees of freedom by making additional assumptions.
First, we conduct two types of cycle benchmarking (CB) experiments, the standard CB and CB with interleaving single-qubit gates (called \emph{interleaved CB}), as shown in Fig.~\ref{fig:main_cb}.
The results are shown in Fig.~\ref{fig:main_exp_cbraw}. Here a set of two Pauli labels on the $x$-axis (\textit{e.g.}, $\{IZ,ZZ\}$) corresponds to the geometric mean of the two Pauli fidelities (\textit{e.g.}, $\sqrt{\lambda_{IZ}\lambda_{ZZ}}$).
Comparing with Table~\ref{tab:main:CNOT_full}, we see that all learnable information about the Pauli fidelities (both the learnable individual fidelities and the learnable products of two) is successfully extracted.
Also note from Fig.~\ref{fig:main_exp_cbraw} that the two types of CB experiments give consistent estimates, in terms of both the process fidelity and individual Pauli fidelities (\textit{e.g.}, $\sqrt{\lambda_{XZ}\lambda_{YY}}$ estimated from standard CB is consistent with $\lambda_{XZ}$ and $\lambda_{YY}$ from interleaved CB).
\begin{figure}
\caption{Estimates of Pauli fidelities of IBM's CNOT gate via standard CB (left) and CB with interleaved gates (right), using the circuits shown in Fig.~\ref{fig:main_cb}.}
\label{fig:main_exp_cbraw}
\end{figure}
We have shown that all 13 learnable degrees of freedom (excluding the trivial $\lambda_{II}=1$) are extracted in Fig.~\ref{fig:main_exp_cbraw} by comparing with Table~\ref{tab:main:CNOT_full}, and there remain 2 unlearnable degrees of freedom. We can bound the feasible region of the 2 unlearnable degrees of freedom using physical constraints, \textit{i.e.}, the reconstructed Pauli noise channel must be completely positive.
This is equivalent to requiring $p_a\ge 0$ for all Pauli error rates $p_a$.
We choose $\lambda_{XX}$ and $\lambda_{ZZ}$ as a representation of the unlearnable degrees of freedom, and plot the calculated feasible region in Fig.~\ref{fig:main_exp_cbfeasible} (a), which happens to be a rectangular area. We also calculate the feasible region for each unlearnable Pauli fidelity and Pauli error rate, which are presented in Fig.~\ref{fig:main_exp_cbfeasible} (b), (c). In particular, we choose two extreme points (blue and green dots in Fig.~\ref{fig:main_exp_cbfeasible} (a)) in the feasible region and plot the corresponding noise model in Fig.~\ref{fig:main_exp_cbfeasible} (b), (c). Note that the (approximately) learnable Pauli error rates (on the left of the red vertical dashed line) are nearly invariant under change of gauge degrees of freedom, but they can be estimated to be negative due to statistical fluctuation. Thus, when we calculate the physical constraints, we only require those unlearnable Pauli error rates (on the right of the red vertical dashed line) to be non-negative.
\begin{figure}
\caption{Feasible region of the learned Pauli noise model, using data from Fig.~\ref{fig:main_exp_cbraw}.}
\label{fig:main_exp_cbfeasible}
\end{figure}
Next, we explore an approach to estimate the unlearnable information with additional assumptions.
Suppose that one can prepare $\ket{0}^{\otimes n}$ perfectly.
Since we assume noiseless single-qubit gates, this means we can prepare a set of perfect tomographically complete states $\{\ket{0/1},\ket{\pm},\ket{\pm i}\}$.
In this case, all the unlearnable degrees of freedom become learnable, as one can first perform a measurement device tomography, and then directly estimate the process matrix of a noisy gate with measurement error mitigated~\cite{Maciejewski2020mitigationofreadout}.
Following this general idea, we propose a variant of cycle benchmarking for Pauli noise characterization, which we call \emph{intercept CB} as it uses the information of intercept in a standard cycle benchmarking protocol. Given an $n$-qubit Clifford gate $\mathcal G$, let $m_0$ be the smallest positive integer such that $\mathcal G^{m_0} = \mathcal I$.
For any Pauli fidelity $\lambda_a$ (regardless of whether learnable or not according to Theorem~\ref{thm:mainpaulifidelity}), consider the following two CB experiments using the standard circuit as in Fig.~\ref{fig:main_cb} (a). First, prepare an eigenstate of $P_a$, run CB with depth $l m_0+1$ for some non-negative integer $l$, and estimate the expectation value of $P_b\mathrel{\mathop:}\nobreak\mkern-1.2mu=\mathcal G( P_a )$. The result equals
\begin{equation}
{\mathop{\mbb{E}}}\expval{P_b}_{l m_0+1}=\lambda^S_{P_a}\lambda^M_{P_b}\lambda_{a}\left(\prod_{k=1}^{m_0}\lambda_{\mathcal G^k(P_a)}\right)^l,
\end{equation}
where $\lambda_{P_{a/b}}^{S/M}$ is the Pauli fidelity of the state preparation and measurement noise channel, respectively (earlier we have absorbed these two coefficients into a single coefficient $A$ for simplicity). Second, prepare an eigenstate of $P_b$, run CB with depth $l m_0$, and estimate the expectation value of $P_b$. The result equals
\begin{equation}
{\mathop{\mbb{E}}}\expval{P_b}_{l m_0}=\lambda^S_{P_b}\lambda^M_{P_b}\left(\prod_{k=1}^{m_0}\lambda_{\mathcal G^k(P_a)}\right)^l.
\end{equation}
By fitting both ${\mathop{\mbb{E}}}\expval{P_b}_{l m_0+1}$ and ${\mathop{\mbb{E}}}\expval{P_b}_{l m_0}$ as exponential decays in $l$, extracting the intercepts (function values at $l=0$), and taking the ratio,
we obtain an estimator $\widehat{\lambda}^{\text{ICB}}_a$
that is asymptotically unbiased to $\lambda_{a}\cdot{\lambda^{S}_{P_a}}/{\lambda^{S}_{P_b}}$.
This estimator is robust against measurement noise. Note that $\lambda^S_{P_a}=\lambda^S_{P_b}=1$ if we assume perfect initial state preparation, and in this case the above shows that $\lambda_a$ is learnable, and thus the entire Pauli noise channel is learnable.
We note that, instead of fitting an exponential decay in $l$, one could in principle just take $l=0$ and estimate the ratio of ${\mathop{\mbb{E}}}\expval{P_b}_{0}$ and ${\mathop{\mbb{E}}}\expval{P_b}_{1}$, which also yields a consistent estimate for $\lambda_a\cdot\lambda^S_{P_a}/\lambda^S_{P_b}$.
If one has already obtained all the learnable information from previous experiments, this could be a more efficient approach.
However, if one has not done those experiments, the intercept CB with multiple depths can estimate the intercept (unlearnable information) and slope (learnable information) simultaneously, which is more sample efficient.
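For concreteness, the post-processing of intercept CB can be sketched as follows (our own illustration with synthetic data; the fidelity values and the use of \texttt{scipy.optimize.curve\_fit} are arbitrary choices). Both decays are fit to $A\cdot f^l$, and the ratio of the fitted intercepts estimates $\lambda_a\lambda^S_{P_a}/\lambda^S_{P_b}$, independently of the measurement noise.
\begin{verbatim}
# Minimal sketch of intercept-CB post-processing on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def decay(l, A, f):
    return A * f ** l

# Hypothetical ground-truth parameters, for illustration only.
lam_a, lam_cycle = 0.96, 0.92               # lambda_a and prod_k lambda_{G^k(P_a)}
lam_S_a, lam_S_b, lam_M_b = 0.99, 0.985, 0.97

depths = np.arange(8)                       # values of l
rng = np.random.default_rng(0)
y1 = lam_S_a * lam_M_b * lam_a * lam_cycle ** depths + rng.normal(0, 1e-3, 8)
y0 = lam_S_b * lam_M_b * lam_cycle ** depths + rng.normal(0, 1e-3, 8)

(A1, _), _ = curve_fit(decay, depths, y1, p0=[1.0, 0.9])
(A0, _), _ = curve_fit(decay, depths, y0, p0=[1.0, 0.9])

# Ratio of intercepts estimates lambda_a * lam_S_a / lam_S_b, independent of lam_M_b.
print(A1 / A0, lam_a * lam_S_a / lam_S_b)
\end{verbatim}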
We numerically simulate intercept CB for characterizing the CNOT gate under different state preparation (SP) and measurement (M) noise. As shown in Fig.~\ref{fig:main_sim_intercept}, this method yields relatively precise estimates when there is only measurement noise, even if that noise is orders of magnitude stronger than the gate noise, but deviates significantly from the true noise model even under small state preparation noise. We refer the reader to Supplementary Section III for more details about the numerical simulation.
Finally, we experimentally implement intercept CB to estimate $\lambda_{XX}$ and $\lambda_{ZZ}$, which are the two unlearnable degrees of freedom of CNOT, allowing us to determine all the Pauli fidelities and Pauli error rates.
One challenge in interpreting the results is that we do not know in general whether the low SP noise assumption holds, therefore it is unclear if the learned results should be trusted.
However, for the estimate to be correct, it should at least lie in the physically feasible region we obtained earlier in Fig.~\ref{fig:main_exp_cbfeasible}.
In Fig.~\ref{fig:main_exp_intercept}, we present our experimental results of intercept CB. It turns out that certain Pauli fidelities are far away from the physical region by several standard deviations.
This gives strong evidence that the low SP noise assumption was \emph{not} true on the platform we used.
The data collected here can further be used to give a lower bound for the SP noise. Suppose we obtain the physical region of $\lambda_a$ to be $[\widehat{\lambda}_{a,\mathrm{min}},\widehat{\lambda}_{a,\mathrm{max}}]$. Combining with the expression of intercept CB, we have
\begin{equation}
{\widehat{\lambda}^\mathrm{ICB}_a}/{\widehat{\lambda}_{a,\mathrm{max}}}\le{\lambda^S_{P_a}}/{\lambda^S_{P_b}}\le{\widehat{\lambda}^\mathrm{ICB}_a}/{\widehat{\lambda}_{a,\mathrm{min}}}.
\end{equation}
Applying this to the data of $IZ$ and $ZZ$ in Fig.~\ref{fig:main_exp_intercept} (a), we have $\lambda^S_{IZ}/\lambda^S_{ZZ} \le 0.9879(23)$.
If we make a physical assumption that the state preparation noise is a random bit-flip during the qubit initialization, the fidelity ratio above reduces to $1-2\epsilon_1$ with $\epsilon_1$ the bit-flip rate on the first qubit, so $1-2\epsilon_1\le 0.9879(23)$ implies that $\epsilon_1$ is lower bounded by $0.61(12)\%$.
One can in principle bound the bit-flip rate on the second qubit by looking at $\lambda^S_{XX}/\lambda^S_{XI}$. Unfortunately, our estimate of $\lambda^S_{XX}$ from intercept CB falls in the physical region within one standard deviation, so there is no nontrivial lower bound. One could expect to obtain a useful lower bound by looking at a CNOT gate with reversed control and target.
The lower bound of SP noise obtained here is completely independent of the measurement noise and does not suffer from the issue of gauge freedom~\cite{nielsen2021gate}, as long as all of our noise assumptions are valid, \textit{i.e.}, there is no significant contribution from time non-stationary, non-Markovian, or single-qubit gate-dependent noise.
\begin{figure}
\caption{Simulation of intercept CB on CNOT under different SPAM noise rates. The simulated noise channel is a $2$-qubit amplitude damping channel with effective noise rate $5\%$, and SPAM noise is modeled as bit-flip errors. For the blue (green) lines, we introduce random bit-flip errors to the measurement (state preparation). The solid lines show the $l_1$-distance of the estimated Pauli fidelities from the true Pauli fidelities, while the dashed lines show the $l_1$-distance of the (individually) learnable Pauli fidelities from the ground truth.
}
\label{fig:main_sim_intercept}
\end{figure}
\begin{figure}
\caption{The learned Pauli noise model using intercept CB. The feasible regions (blue bars) are taken from Fig.~\ref{fig:main_exp_cbfeasible}.}
\label{fig:main_exp_intercept}
\end{figure}
\section{Discussion}
We have shown how to characterize the learnability of Pauli noise of Clifford gates and discussed a method to extract unlearnable information by assuming perfect initial state preparation. It is also interesting to consider other physically motivated assumptions on the noise model to avoid unlearnability. For example, we can write down a parameterization of the noise model based on the underlying physical mechanism, which may have fewer than $4^n$ parameters. The main issue here is that these assumptions are highly platform-dependent and should be decided case by case. Moreover, it is unclear to what extent the learned results should be trusted when additional assumptions are made, since in general we cannot test whether the assumptions hold due to unlearnability.
Another direction to overcome the unlearnability is to change the model of quantum experiments. Here we have been working with the standard model as in gate set tomography, where a quantum measurement decoheres the system and only outputs classical information. However, some platforms might support quantum non-demolition (QND) measurements, and in this case measurements can be applied repeatedly, which could potentially allow more information to be learned~\cite{laflamme2022algorithmic}.
Recently, Ref.~\cite{huang2022foundations} considered similar issues of noise learnability. They studied a different Pauli noise model with perfect initial state $\ket{0}$, perfect computational basis measurement, and noisy single qubit gates, and showed the existence of unlearnable information. In contrast, here we focus on the learnability of Pauli noise of multi-qubit Clifford gates assuming perfect single-qubit gates (with noisy SPAM), and in practice we make the standard assumption that noise on single-qubit gates is gate-independent (\textit{e.g.}~\cite[Sec. II A]{Ferracin2022Efficiently}), in which case our noise learning results are interpreted as characterizing a dressed cycle.
This work leaves open the question of noise learnability for non-Clifford gates. An issue here is that randomized compiling is not known to work with non-Clifford gates in general, so it is unclear if the general CPTP noise learnability problem can be reduced to Pauli noise. Recent work~\cite{liu2021benchmarking} shows that random quantum circuits can effectively twirl the CPTP noise channel into Pauli noise and can be used to learn the total Pauli error. The question of whether more information can be learned still remains open.
Another issue to address is the scalability in noise learning. It is impossible to estimate all learnable degrees of freedom efficiently as there are exponentially many of them (an exponential lower bound on the sample complexity is shown in~\cite{chen2022quantum}). One way to avoid the exponential scaling issue is to assume the noise model has certain special structure (such as sparsity or low-weight) such that the noise model only has polynomially many parameters~\cite{harper2020efficient,harper2021fast,flammia2020efficient,berg2022probabilistic}. It is an interesting open direction to study the characterization of learnability under these assumptions, and we give some related discussions in Supplementary Section II D.
\section*{Data availability}
The data generated in this study is available at \url{https://github.com/csenrui/Pauli_Learnability}
\section*{Code availability}
The code that supports the findings of this study is available at \url{https://github.com/csenrui/Pauli_Learnability}
\begin{acknowledgments}
We thank Ewout van den Berg, Arnaud Carignan-Dugas, Robert Huang, Kristan Temme and Pei Zeng for helpful discussions.
We thank the anonymous reviewer \#2 for suggesting an alternative approach to intercept cycle benchmarking.
S.C. and L.J. acknowledge support from the ARO (W911NF-18-1-0020, W911NF-18-1-0212), ARO MURI (W911NF-16-1-0349, W911NF-21-1-0325), AFOSR MURI (FA9550-19-1-0399, FA9550-21-1-0209), AFRL (FA8649-21-P-0781), DoE Q-NEXT, NSF (OMA-1936118, EEC-1941583, OMA-2137642), NTT Research, and the Packard Foundation (2020-71479).
Y.L. was supported by DOE NQISRC QSA grant \#FP00010905, Vannevar Bush faculty fellowship N00014-17-1-3025, MURI Grant FA9550-18-1-0161 and NSF award DMR-1747426. A.S. is supported by a Chicago Prize Postdoctoral Fellowship in Theoretical Quantum Science. B.F. acknowledges support from AFOSR (YIP number FA9550-18-1-0148 and FA9550-21-1-0008). This material is based upon work partially supported by the National Science Foundation under Grant CCF-2044923 (CAREER) and by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
\end{acknowledgments}
\section*{Author contributions}
S.C. and Y.L. developed the theory and performed the experiments. B.F. and L.J. supervised the project. All authors contributed important ideas during initial discussions and contributed to writing the manuscript.
\section*{Competing interests}
The authors declare no competing interests.
\begin{appendix}
\date{\today}
\maketitle
\tableofcontents
\section{Preliminaries}\label{sec:pre}
Define ${\sf P}^n$ to be the $n$-qubit Pauli group modulo its center.
We can label any Pauli operator $P_a\in{\sf P}^n$ with a $2n$-bit string $a$.
Specifically, we define $P_{\bm 0}$ to be the identity operator $I$.
We will use the notations $P_a$ and $a$ interchangeably when there is no confusion.
The \emph{pattern} of an $n$-qubit Pauli operator $P_a$, denoted as $\mathrm{pt}(P_a)$, is an $n$-bit string that takes $0$ at the $j$th bit if $P_a$ equals $I$ at the $j$th qubit and takes $1$ otherwise.
For example, $\mathrm{pt}(XYIZI)=\mathrm{pt}(XXIXI)=11010$.
An $n$-qubit \emph{Pauli diagonal map} $\Lambda$ is a linear map of the following form
\begin{equation}
\Lambda(\cdot) = \sum_{a\in{{\sf P}^n}}p_a P_a(\cdot)P_a,
\end{equation}
where $\bm{p}\mathrel{\mathop:}\nobreak\mkern-1.2mu= \{p_a\}_a$ are called the \emph{Pauli error rates}.
If $\Lambda$ is further a CPTP map, which corresponds to the condition $p_a\ge 0$ and $\sum_a p_a = 1$, then it is called a \emph{Pauli channel}.
An important property of Pauli diagonal maps is that their eigen-operators are exactly the $4^n$ Pauli operators. Thus, an alternative expression for $\Lambda$ is
\begin{equation}
\Lambda(\cdot) = \frac{1}{2^n}\sum_{b\in{{\sf P}^n}}\lambda_b\Tr(P_b(\cdot))P_b,
\end{equation}
where $\bm\lambda \mathrel{\mathop:}\nobreak\mkern-1.2mu= \{\lambda_b\}_b$ are called the \emph{Pauli fidelities} or \emph{Pauli eigenvalues} ~\cite{flammia2020efficient,flammia2021pauli,chen2020robust}.
These two sets of parameters, $\bm p$ and $\bm \lambda$, are related by the Walsh-Hadamard transform
\begin{equation}
\begin{aligned}
\lambda_b = \sum_{a\in{{\sf P}^n}}p_a(-1)^\expval{a,b},\quad
p_a = \frac{1}{4^n}\sum_{b\in{{\sf P}^n}}\lambda_b(-1)^\expval{a,b},
\end{aligned}
\end{equation}
where $\expval{a,b}$ equals $0$ if $P_a,P_b$ commute and $1$ otherwise.
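For example, for the single-qubit bit-flip channel with $p_I=1-p$, $p_X=p$ and $p_Y=p_Z=0$, the transform gives $\lambda_I=\lambda_X=1$ and $\lambda_Y=\lambda_Z=1-2p$: bit flips leave $X$ expectation values intact while shrinking those of $Y$ and $Z$.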
For a general linear map $\mathcal E$, define its \emph{Pauli twirl} as
\begin{equation}
\mathcal E^{P}\mathrel{\mathop:}\nobreak\mkern-1.2mu= \frac{1}{4^n}\sum_{a\in{{\sf P}^n}} \mathcal P_a\mathcal E \mathcal P_a.
\end{equation}
Here we use the calligraphic $\mathcal P_a$ to represent the unitary channel of Pauli gate $P_a$, $\mathcal P_a(\cdot) := P_a(\cdot)P_a$.
The Pauli twirl of any linear map (quantum channel) is a Pauli diagonal map (Pauli channel). When we talk about the Pauli fidelities of a non-Pauli channel, we are effectively referring to the Pauli fidelities of its Pauli twirl.
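Equivalently, in the Pauli transfer matrix picture, where a linear map $\mathcal E$ is represented by $(R_{\mathcal E})_{ab} = \frac{1}{2^n}\Tr(P_a\mathcal E(P_b))$, the Pauli twirl simply removes the off-diagonal entries, $(R_{\mathcal E^P})_{ab} = \delta_{ab}(R_{\mathcal E})_{aa}$, so these Pauli fidelities are the diagonal entries $\lambda_a = \frac{1}{2^n}\Tr(P_a\mathcal E(P_a))$.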
\section{Theory on the learnability of Pauli noise}
In this section, we give a precise characterization of what information in the Pauli noise channel associated with Clifford gates can be learned in the presence of state-preparation-and-measurement (SPAM) noise.
Our results show that certain Pauli fidelities of a noisy multi-qubit Clifford gate cannot be learned in a SPAM-robust manner, even with the assumption that single-qubit gates can be perfectly implemented.
The proof is related to the notion of \emph{gauge freedom} in the literature of gate set tomography~\cite{nielsen2021gate}.
We note that the results presented in this section emphasize the no-go part, \textit{i.e.}, some information about the Pauli noise is (SPAM-robustly) unlearnable even with many favorable assumptions on the experimental conditions.
As shown in the main text, the learnable information about Pauli noise can be extracted in a much more practical setting using cycle benchmarking~\cite{erhard2019characterizing} and its variant.
\subsection{Assumptions and definitions}\label{sec:noisemodelAssumptions}
We focus on an $n$-qubit quantum system. Below are our assumptions on the noise model.
\begin{itemize}
\item \textbf{Assumption 1.} All single-qubit unitary operations can be perfectly implemented.
\item \textbf{Assumption 2.} A set of multi-qubit Clifford gates $\mathfrak G \mathrel{\mathop:}\nobreak\mkern-1.2mu= \{\mathcal G\}$ can be implemented and are subject to gate-dependent Pauli noise, \textit{i.e.}, $\widetilde{\mathcal G} = \mathcal G\circ \Lambda_{\mathcal G}$ where $\Lambda_\mathcal G$ is some $n$-qubit Pauli channel.
\item \textbf{Assumption 3.} Any state preparation and measurement can be implemented, up to some fixed Pauli noise channel $\mathcal E^S$ and $\mathcal E^M$, respectively.
\item\textbf{Assumption 4.} The Pauli noise channels appearing in the above assumptions satisfy that all Pauli fidelities and Pauli error rates are strictly positive.
\end{itemize}
Assumption 1 is motivated by the fact that the noise of single-qubit gates is usually much smaller than that of multi-qubit gates on today's hardware. Such an approximation is widely adopted in the literature~\cite{erhard2019characterizing,wallman2016noise} with slight modifications.
In Assumption 2, we view every Clifford gate as an $n$-qubit gate, and allow the noise to be $n$-qubit. This means we are taking all crosstalk into account.
A Clifford gate acting on a different (ordered) subset of qubits is viewed as a different gate and can thus have a different noise channel (\textit{e.g.}, CNOT$_{12}$, CNOT$_{21}$, CNOT$_{23}$ have different noise channels.)
We will discuss the no-crosstalk situation in Sec.~\ref{sec:no_crosstalk}.
The rationale for assuming Pauli noise in Assumption 2 and 3 is that we can always use randomized compiling~\cite{wallman2016noise,hashim2020randomized} to tailor general noise into Pauli channels.
Finally, Assumption 4 is mostly for technical convenience. The requirement of positive Pauli error rates roughly implies the Pauli channels are at the interior of the CPTP polytope, and will be useful later in constructing valid gauge transformations. The requirement of positive Pauli fidelities is also reasonable for any physically interesting noise model.
Specifying a Clifford gate set $\mathfrak G$, a \emph{noise model} satisfying our assumptions is determined by the Pauli channels describing gate noise and SPAM noise.
We can thus view a noise model as a collection of Pauli fidelities, denoted as $\mathcal N = \{\mathcal E^S,\mathcal E^M,\Lambda\}$, where $\mathcal E^{S/M} = \{\lambda_a^{S/M}\}_a$ describes the SPAM noise and $\Lambda = \{\lambda_a^{\mathcal {G}}\}_{a,\mathcal G}$ describes the gate noise.
We note that this is an example of \emph{parametrized gate set} in the language of gate set tomography~\cite{nielsen2021gate}.
In order to gain information about an unknown noise model, one needs to conduct \emph{experiments}. In the circuit model, any experiment can be described by some state preparation, a sequence of quantum gates, and some POVM measurements.
An experiment conducted with different underlying noise model would yield different measurement outcome distributions.
Explicitly, consider an (ideal) experiment with initial state $\rho_0$, gate sequence $\mathcal C$, POVM measurements $\{E_o\}_o$. Denote the noisy implementation of these objects within a certain noise model $\mathcal N$ with a tilde. Then the experiment effectively maps $\mathcal N$ to a probability distribution $p_{\mathcal N}(o) = \Tr(\widetilde E_o(\widetilde{\mathcal C}(\widetilde \rho_0)))$.
We call two noise models $\mathcal N_1$, $\mathcal N_2$ \emph{indistinguishable} if for all possible experiments we have $p_{\mathcal N_1}=p_{\mathcal N_2}$, and distinguishable otherwise.
\begin{definition}[Learnable and unlearnable function]\label{de:learnability}
A function $f$ of noise models is learnable if
\begin{equation}
f(\mathcal N_1)\ne f(\mathcal N_2) \implies \mathcal N_1, \mathcal N_2~\text{are distinguishable},
\end{equation}
for any noise models $\mathcal N_1$, $\mathcal N_2$.
In contrast, $f$ is unlearnable if there exist indistinguishable noise models $\mathcal N_1$, $\mathcal N_2$ such that $f(\mathcal N_1)\ne f(\mathcal N_2)$.
\end{definition}
Note that the above definition of ``learnable'' does not necessarily mean that the value of the function can be learned. However, throughout this paper whenever some function is ``learnable'' according to Definition~\ref{de:learnability}, it is also learnable in the stronger sense that we can design an experiment to estimate it up to arbitrarily small error with high success probability.
In the language of gate set tomography, an unlearnable function is a \emph{gauge-dependent} quantity of the gate set~\cite{nielsen2021gate}.
On the other hand, any learnable function can in principle be learned to arbitrary precision.
In the following, we will focus on the learnability of functions of the gate noise, including individual Pauli fidelities and their multiplicative combinations.
\subsection{Learnability of individual Pauli fidelity}
We first study the learnability of individual Pauli fidelities associated with a Clifford gate. This has been an open problem in recent studies of quantum benchmarking.
Perhaps surprisingly, we obtain the following simple criterion for the learnability of Pauli fidelities of any Clifford gate.
\begin{theorem}\label{th:nogo}
With Assumptions 1-4, for any $n$-qubit Clifford gate $\mathcal G$ and Pauli operator $P_a$, the Pauli fidelity $\lambda_a^{\mathcal G}$ is unlearnable if and only if $\mathcal G$ changes the pattern of $P_a$, \textit{i.e.,} $\mathrm{pt}(\mathcal G(P_a))\ne \mathrm{pt}(P_a)$.
\end{theorem}
The fact that certain Pauli fidelities are SPAM-robustly unlearnable is observed in some recent works~\cite{erhard2019characterizing,hashim2020randomized,berg2022probabilistic,Ferracin2022Efficiently}, described as ``degeneracy'' of the noise model. Our work is the first to give a rigorous argument for this by establishing connections to gate set tomography.
As an example, for the CNOT and SWAP gates, we can immediately list its learnable and unlearnable Pauli fidelities in Table~\ref{tab:cnot_swap_individual}.
We note that the no-go theorem holds even under the no-crosstalk assumption, as will be discussed in Sec.~\ref{sec:no_crosstalk},
so introducing ancillary qubits or other multi-qubit Clifford gates cannot help resolve the unlearnability.
\begin{table}[!htp]
\centering
\begin{tabular}{|c|c|c|}
\hline
Gate & Learnable & Unlearnable \\
\hline
CNOT&$\lambda_{II},\lambda_{ZI},\lambda_{IX},\lambda_{ZX},
\lambda_{XZ},\lambda_{YY},\lambda_{XY},\lambda_{YZ}$ &
$\lambda_{IZ},\lambda_{XI},\lambda_{ZZ},\lambda_{XX},
\lambda_{IY},\lambda_{YI},\lambda_{ZY},\lambda_{YX}$\\
\hline
SWAP&$\lambda_{II},\lambda_{XX},\lambda_{XY},\lambda_{XZ},
\lambda_{YX},\lambda_{YY},\lambda_{YZ},\lambda_{ZX},\lambda_{ZY},\lambda_{ZZ}$ &
$\lambda_{IX},\lambda_{IY},\lambda_{IZ},
\lambda_{XI},\lambda_{YI},\lambda_{ZI}$\\
\hline
\end{tabular}
\caption{Learnability of individual Pauli fidelity of CNOT and SWAP.}
\label{tab:cnot_swap_individual}
\end{table}
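As an illustration, the CNOT column of Table~\ref{tab:cnot_swap_individual} can be reproduced with a few lines of Python (our own sketch; the symplectic update rule for CNOT is hard-coded for this example).
\begin{verbatim}
# Minimal sketch: classify the Pauli fidelities of CNOT by whether the gate
# changes the Pauli pattern (the criterion of the theorem above).
import itertools

LABELS = {(0, 0): 'I', (1, 0): 'X', (1, 1): 'Y', (0, 1): 'Z'}

def cnot(p):                     # p = (x1, z1, x2, z2); X1 -> X1X2, Z2 -> Z1Z2
    x1, z1, x2, z2 = p
    return (x1, z1 ^ z2, x2 ^ x1, z2)

def pattern(p):
    return (p[0] | p[1], p[2] | p[3])

def name(p):
    return LABELS[(p[0], p[1])] + LABELS[(p[2], p[3])]

learnable, unlearnable = [], []
for p in itertools.product((0, 1), repeat=4):
    (learnable if pattern(p) == pattern(cnot(p)) else unlearnable).append(name(p))

print('learnable:  ', learnable)     # II, IX, ZI, ZX, XZ, XY, YZ, YY
print('unlearnable:', unlearnable)   # IZ, IY, ZZ, ZY, XI, XX, YI, YX
\end{verbatim}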
Before going into the proof, we make several remarks about Theorem~\ref{th:nogo}.
The correct interpretation of the no-go result in Theorem~\ref{th:nogo} is that certain Pauli fidelities cannot be learned in a fully SPAM-robust manner.
If one has some pre-knowledge that the SPAM noise is much weaker than the gate noise, there exist methods that give a fairly good estimate of those unlearnable Pauli fidelities based on physical constraints.
See the discussions in the main text.
On the other hand, it is observed that the product of certain unlearnable Pauli fidelities can be learned in a SPAM-robust manner, such as $\lambda_{XI}\cdot\lambda_{XX}$ for the CNOT gate~\cite{erhard2019characterizing}. We will characterize the learnability of this kind of products of Pauli fidelities in the next subsection.
\begin{proof}[Proof of Theorem~\ref{th:nogo}]
We start with the ``only if'' part, which is equivalent to saying that $\mathrm{pt}(P_a)=\mathrm{pt}(\mathcal G(P_a))$ implies $\lambda_a^{\mathcal G}$ being learnable.
The condition $\mathrm{pt}(\mathcal G(P_a))=\mathrm{pt}(P_a)$ implies $\mathcal G(P_a)$ is equivalent to $P_a$ up to some local unitary transformation, \textit{i.e.}, there exists a product of single-qubit unitary gates $\mathcal U \mathrel{\mathop:}\nobreak\mkern-1.2mu= \bigotimes_{j=1}^n\mathcal U_j$ such that
\begin{equation}
\mathcal U\circ\mathcal G(P_a) = P_a.
\end{equation}
Now we design the following experiments parameterized by a positive integer $m$,
\begin{itemize}
\item Initial state: $\rho_0 = (I+P_a)/2^n$,
\item POVM measurement: $E_{\pm 1} = (I\pm P_a)/{2}$,
\item Circuit: $\mathcal C^m = \left(\mathcal U\circ\mathcal G\right)^m$.
\end{itemize}
Consider the measurement probability by running these experiments within a noise model $\mathcal N$.
\begin{equation}
\begin{aligned}
p^{(m)}_{\pm 1}(\mathcal N) &= \Tr\left( \widetilde{E}_{\pm 1} \widetilde{\mathcal C}^m (\widetilde{\rho}_0)\right) \\
&= \Tr \left( \frac{I\pm P_a}{2} \cdot\left( \mathcal E^M \circ \left(
\mathcal U\circ\mathcal G
\right)^m\circ\mathcal E^S \right)
\left(\frac{I+P_a}{2^n}\right) \right)\\
&= \Tr\left(\frac{I\pm P_a}{2} \cdot\frac{I+\lambda^M_{a}
\left(\lambda_a^{\mathcal G}\right)^m
\lambda^S_{a}P_a}{2^n} \right)\\
&= \frac{1\pm\lambda^M_{a}
\left(\lambda_a^{\mathcal G}\right)^m
\lambda^S_{a}}{2}.
\end{aligned}
\end{equation}
Recall that $\lambda_a^{S/M}$ is the Pauli fidelity of the SPAM noise channel for $P_a$.
The expectation value is
\begin{equation}
\mathbfb E^{(m)}(\mathcal N) = \lambda^M_{a}
\left(\lambda_a^{\mathcal G}\right)^m
\lambda^S_{a}.
\end{equation}
If we take the ratio of expectation values of two experiments with consecutive $m$, we obtain (recall that all these Pauli fidelities are strictly positive by Assumption 4)
\begin{equation}
\mathbfb E^{(m+1)}(\mathcal N)/\mathbfb E^{(m)}(\mathcal N) = \lambda_a^{\mathcal G}.
\end{equation}
This implies that if two noise models assign different values to $\lambda_a^{\mathcal G}$, the above experiments would be able to distinguish between them. By Definition~\ref{de:learnability}, we conclude $\lambda_a^{\mathcal G}$ is learnable.
Next we prove the ``if'' part. Fix an $n$-qubit Clifford gate $\mathcal G$.
Let $P_a$ be any Pauli operator such that $\mathrm{pt}(\mathcal G(P_a))\neq \mathrm{pt}(P_a)$. We will show that $\lambda_a^{\mathcal G}$ is unlearnable by explicitly constructing indistinguishable noise models that assign different values to $\lambda_a^{\mathcal G}$.
Recall that any experiment involves a noisy initial state $\tilde{\rho}_0$, a noisy measurement $\{\widetilde{E}_l\}_l$, and a quantum circuit consisting of noiseless single-qubit gates $\mathcal U \mathrel{\mathop:}\nobreak\mkern-1.2mu= \bigotimes_{j=1}^n\mathcal U_j$ and noisy multi-qubit Clifford gates $\widetilde{\mathcal T}$.
Now, introduce an invertible linear map $\mathcal M:\mathcal L(\mathcal H_{2^n})\to \mathcal L(\mathcal H_{2^n})$, and consider the following transformation
\begin{equation}\label{eq:gauge_trans}
\begin{aligned} &\widetilde{\rho}_0\mapsto \mathcal M(\widetilde{\rho}_0),\quad
\widetilde{E}_l\mapsto (\mathcal M^{-1})^\dagger (\widetilde{E}_l),\\
&\bigotimes_{j=1}^n\mathcal U_j \mapsto \mathcal M\circ\bigotimes_{j=1}^n\mathcal U_j\circ\mathcal M^{-1},\\
&\widetilde{\mathcal T} \mapsto \mathcal M\circ \widetilde{\mathcal T}\circ\mathcal M^{-1}.
\end{aligned}
\end{equation}
One can immediately see that any measurement outcome distribution $p_l\mathrel{\mathop:}\nobreak\mkern-1.2mu=\Tr(\widetilde{E}_l\widetilde{\mathcal C}(\widetilde{\rho}_0))$ remains unchanged via such transformation. Therefore the noise models related by this transformation are indistinguishable. This is called a \emph{gauge transformation} in the literature of gate set tomography~\cite{nielsen2021gate}.
To use this idea for the proof, we start with a noise model $\mathcal N$ and construct a map $\mathcal M$ such that
\begin{enumerate}
\item The transformation yields a physical noise model $\mathcal N'$ satisfying Assumptions 1-4 in Sec.~\ref{sec:noisemodelAssumptions}.
\item The two noise models $\mathcal N$, $\mathcal N'$ assign different values to $\lambda_a^{\mathcal G}$.
\end{enumerate}
Starting with a generic noise model $\mathcal N = \{\mathcal E^S,\mathcal E^M,\Lambda\}$ satisfying the assumptions,
we construct the gauge transform map $\mathcal M$ as follows.
Since $\mathrm{pt}(\mathcal G(P_a))\ne \mathrm{pt}(P_a)$, there exists an index $i\in[n]$ such that one and only one of $(P_a)_i$ and $\mathcal G(P_a)_i$ equals $I$. Let $\mathcal M$ be the single-qubit depolarizing channel on the $i$-th qubit,
\begin{equation}\label{eq:dep_trans}
\mathcal M\mathrel{\mathop:}\nobreak\mkern-1.2mu=\mathcal D_{i}\otimes\mathcal I_{[n]\backslash i},
\end{equation}
where the single-qubit depolarizing channel is defined as
\begin{equation}
\forall P\in\{I,X,Y,Z\},\quad \mathcal D(P) = \left\{
\begin{aligned}
P,\quad&~\text{if}~ P=I,\\
\eta P,\quad&~\text{otherwise},
\end{aligned}
\right.
\end{equation}
for some parameter $0<\eta<1$. We will specify the value of $\eta$ later.
Now we calculate the transformed noise model $\mathcal N'=\{\mathcal E^{S'},\mathcal E^{M'},\Lambda'\}$. The SPAM noise channels are transformed as
\begin{equation}\label{eq:SPAMupdate}
\mathcal E^{S'} = \mathcal M\mathcal E^S,\quad \mathcal E^{M'} = \mathcal E^M\mathcal M^{-1},
\end{equation}
both of which are still Pauli diagonal maps. Thanks to our Assumption 4, as long as $\eta$ is sufficiently close to $1$, they can be shown to be Pauli channels.
Next, the single-qubit unitary gates are transformed as
\begin{equation}
\mathcal M \left(\bigotimes_{j=1}^n \mathcal U_j \right)\mathcal M^{-1}
=\mathcal D_i\mathcal U_i\mathcal D_i^\dagger \otimes \bigotimes_{j\ne i}\mathcal U_j
= \bigotimes_j \mathcal U_j,
\end{equation}
since the single-qubit depolarizing channel commutes with any single-qubit unitary. This implies the single-qubit unitary gates are still noiseless.
Finally, consider an arbitrary $n$-qubit Clifford gate ${\mathcal T}$. We show that the transformed noisy gate takes the form $\widetilde{\mathcal T}'=\widetilde{\mathcal T}\circ \Lambda_{\mathcal T}'$ where $\Lambda_{\mathcal T}'$ is still a Pauli channel, with the Pauli fidelities updated as follows.
\begin{equation}\label{eq:paulifidelityupdate}
{\lambda_b^{\mathcal T}}'=\begin{cases}
\eta \lambda_b^{\mathcal T}, & \text{if }\mathrm{pt}(P_b)_i=0\text{ and }\mathrm{pt}(\mathcal T(P_b))_i=1,\\
\eta^{-1}\lambda_b^{\mathcal T}, & \text{if }\mathrm{pt}(P_b)_i=1\text{ and }\mathrm{pt}(\mathcal T(P_b))_i=0,\\
\lambda_b^{\mathcal T}, &\text{if }\mathrm{pt}(P_b)_i=\mathrm{pt}(\mathcal T(P_b))_i.
\end{cases}
\end{equation}
We give a proof for the first case. Note that
\begin{equation}\label{eq:gate_noise_trans}
\begin{aligned}
\mathcal M\circ \widetilde{\mathcal T}\circ\mathcal M^{-1}
&= \mathcal D_{i}\circ \widetilde{\mathcal T}\circ \mathcal D_{i}^{-1}\\
&= \mathcal D_{i}\circ {\mathcal T}\circ\Lambda_{\mathcal T}\circ \mathcal D_{i}^{-1}\\
&= {\mathcal T}\circ({\mathcal T}^{-1}\circ\mathcal D_{i}\circ {\mathcal T}\circ\Lambda_{\mathcal T}\circ \mathcal D_{i}^{-1}) \\
&\mkern-1.2mu=\nobreak\mathrel{\mathop:}\mathcal T\circ \Lambda'_{\mathcal T},
\end{aligned}
\end{equation}
where we use $\mathcal D_i$ as a shorthand for $\mathcal D_i\otimes\mathcal I_{[n]\backslash i}$.
The transformed noise channel can be written as
\begin{equation}\label{eq:noisechannelupdate}
\Lambda'_{\mathcal T}={\mathcal T}^{-1}\circ\mathcal D_{i}\circ {\mathcal T}\circ\Lambda_{\mathcal T}\circ \mathcal D_{i}^{-1}.
\end{equation}
Let us calculate its action on arbitrary $P_b$.
\begin{equation}
\begin{aligned}
\Lambda_{\mathcal T}'
({P_{b}}) &= (\mathcal T^{-1}\circ \mathcal D_{i}\circ \mathcal T\circ\Lambda_{\mathcal T}\circ \mathcal D_{i}^{-1})(P_b)\\
&= (\eta^{-1})^{\mathrm{pt}(P_b)_i} (\mathcal T^{-1}\circ \mathcal D_{i}\circ \mathcal T\circ\Lambda_{\mathcal T})(P_b)\\
&= \lambda_b^{\mathcal T}(\eta^{-1})^{\mathrm{pt}(P_b)_i} (\mathcal T^{-1}\circ \mathcal D_{i}\circ \mathcal T)(P_b)\\
&=\eta^{\mathrm{pt}(\mathcal T(P_b))_i}\lambda_b^{\mathcal T}(\eta^{-1})^{\mathrm{pt}(P_b)_i} ~P_b.
\end{aligned}
\end{equation}
Thus, $\Lambda_{\mathcal T}'$ is indeed a Pauli diagonal map with Pauli fidelities given by Eq.~\eqref{eq:paulifidelityupdate}. The fact that $\Lambda_{\mathcal T}'$ is guaranteed to be a CPTP map by choosing appropriate $\eta$ will be verified later.
Specifically, if we take $\mathcal T$ to be the Clifford gate $\mathcal G$ that we are interested in, we have ${\lambda_a^{\mathcal G}}' = \eta\lambda_a^{\mathcal G}$ or ${\lambda_a^{\mathcal G}}' = \eta^{-1}\lambda_a^{\mathcal G}$. In either case, ${\lambda_a^{\mathcal G}}' \ne \lambda_a^{\mathcal G}$. This means the two indistinguishable noise models $\mathcal N$, $\mathcal N'$ indeed assign different values to $\lambda_a^{\mathcal G}$.
We now verify that $\mathcal N'$ is indeed a physical noise model and satisfies Assumptions 1-4. We have already shown that single-qubit unitary gates remain noiseless and that all gate noise and SPAM noise are described by Pauli diagonal maps. The only thing left is to make sure all these Pauli diagonal maps are CPTP and satisfy the positivity constraints in Assumption 4.
According to Eq.~\eqref{eq:SPAMupdate} and \eqref{eq:paulifidelityupdate}, any Pauli fidelity $\lambda_b$ of either SPAM noise or gate noise is transformed to one of the following $\lambda_b'\in\{\lambda_b,\eta\lambda_b,\eta^{-1}\lambda_b\}$, so $\lambda_b>0$ implies $\lambda_b'>0$. On the other hand, any transformed Pauli error rate can be bounded by
\begin{equation}\label{eq:positivity}
\begin{aligned}
p_c' &= \frac{1}{4^n}\sum_{b\in{\sf P}^n}(-1)^\expval{b,c}\lambda_b'\\
&\ge \frac{1}{4^n}\sum_{b\in{\sf P}^n}\left((-1)^\expval{b,c}\lambda_b - (\eta^{-1}-1)\lambda_b\right)\\
&\ge p_c - (\eta^{-1}-1).
\end{aligned}
\end{equation}
To ensure every $p'_c>0$, we can choose $1>\eta>(p_{\min}+1)^{-1}$ with $p_{\min}$ being the minimum Pauli error rate among all Pauli channels of both SPAM and gate noise, which is possible since $p_{\min}>0$ by Assumption 4. This means each transformed Pauli diagonal map is completely positive (CP). To see that they are also trace-preserving (TP), just notice from Eqs.~\eqref{eq:SPAMupdate} and~\eqref{eq:paulifidelityupdate} that $\lambda_{\bm 0}'=\lambda_{\bm 0} = 1$ always holds.
Now we conclude that $\mathcal N'$ is indeed a physical noise model satisfying all the assumptions.
Combining with the reasoning in the last paragraph, we see $\lambda_a^{\mathcal G}$ is unlearnable.
This completes our proof.
\end{proof}
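To see this gauge freedom numerically, the following Python sketch (our own illustration; the example noise model, the choice $i=1$, and the value of $\eta$ are arbitrary) applies the update rule of Eq.~\eqref{eq:paulifidelityupdate} to a generic Pauli noise channel of CNOT and checks that the result is still a valid Pauli channel while $\lambda_{IZ}$ has changed.
\begin{verbatim}
# Minimal sketch: gauge transformation on the CNOT noise channel with i = 1.
import itertools
import numpy as np

paulis = list(itertools.product((0, 1), repeat=4))   # (x1, z1, x2, z2)
sym = lambda a, b: (a[0]*b[1] + a[1]*b[0] + a[2]*b[3] + a[3]*b[2]) % 2

def cnot(p):                                         # X1 -> X1X2, Z2 -> Z1Z2
    x1, z1, x2, z2 = p
    return (x1, z1 ^ z2, x2 ^ x1, z2)

pt1 = lambda p: int(p[0] or p[1])                    # pattern bit of qubit 1

# Random CP noise model with strictly positive error rates (Assumption 4).
rng = np.random.default_rng(1)
p_err = {a: 0.001 + 0.003 * rng.random() for a in paulis}
p_err[(0, 0, 0, 0)] = 1 - sum(p_err[a] for a in paulis if a != (0, 0, 0, 0))
lam = {b: sum(p_err[a] * (-1) ** sym(a, b) for a in paulis) for b in paulis}

eta = 0.9995                                         # eta > 1 / (1 + p_min)
lam_new = {}
for b in paulis:
    bits = (pt1(b), pt1(cnot(b)))
    lam_new[b] = {(0, 1): eta, (1, 0): 1 / eta}.get(bits, 1.0) * lam[b]

p_new = {a: sum(lam_new[b] * (-1) ** sym(a, b) for b in paulis) / 16 for a in paulis}
print(min(p_new.values()) > 0)                       # still a valid Pauli channel
print(lam[(0, 0, 0, 1)], lam_new[(0, 0, 0, 1)])      # lambda_IZ rescaled by eta
\end{verbatim}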
\subsection{Characterization of learnable space via algebraic graph theory}\label{sec:space}
We have characterized the learnability of individual Pauli fidelities associated with any Clifford gates in Theorem~\ref{th:nogo}.
Here, we want to understand the learnability of a general function of the gate noise.
We first show that, in our setting, any measurement outcome probability in an experiment can be expressed as a polynomial in the Pauli fidelities of gate and SPAM noise, and each term in the polynomial can be learned via a CB experiment (see Sec.~\ref{sec:justification} for details).
Therefore, it suffices to study the monomials, \textit{i.e.}, products of Pauli fidelities.
For each Pauli fidelity $\lambda_a^{\mathcal G}$, we define the \emph{logarithmic Pauli fidelity} as $l_a^{\mathcal G}\mathrel{\mathop:}\nobreak\mkern-1.2mu= \log \lambda_a^{\mathcal G}$ ($\lambda_a^{\mathcal G}>0$ by Assumption 4).
It then suffices to study the learnability of linear functions of the logarithmic Pauli fidelities.
An alternative reason to only study this class of function is that, under a weak noise assumption, we have $l_a\to 0$, so we can express any function of the noise model as a linear function of $l_a$ under a first order approximation. Note that similar approaches have been explored in the literature~\cite{nielsen2022first,flammia2021averaged}.
Since we are working with Assumption 1-4 which takes all crosstalk into account, we treat the noise channel for each gate in $\mathfrak G$ as $n$-qubit.
The number of independent Pauli fidelities we are interested in is thus
\begin{equation}
|\Lambda| = |\mathfrak G|\cdot 4^n.
\end{equation}
Denote the space of all (real-valued) linear functions of logarithmic Pauli fidelities as $F$; then we have $F\cong \mathbfb R^{|\Lambda|}$. A function $f\in F$ uniquely corresponds to a vector $\bm v\in \mathbfb R^{|\Lambda|}$ by $f(\bm l) = \bm v\cdot \bm l = \sum_{a,\mathcal G}v_{a,\mathcal G}l_a^{\mathcal G}$. We will use the vector to refer to the linear function when there is no ambiguity.
Denote the set of all learnable functions in $F$ as $F_L$ (in the sense of Definition~\ref{de:learnability}). As shown in the following lemma, $F_L$ forms a linear subspace of $F$, so we call $F_L$ the \emph{learnable space}.
\begin{lemma}\label{le:learnable_is_space}
$F_L$ is a linear subspace of $F$.
\end{lemma}
\begin{proof}
Given $\bm v_1,\bm v_2 \in F_L$, consider the learnability of $\bm v_1+\bm v_2$. For any noise models $\mathcal N_1,~\mathcal N_2$,
\begin{equation}
\begin{aligned}
(\bm v_1+\bm v_2)\cdot\bm l_{\mathcal N_1} \ne (\bm v_1+\bm v_2)\cdot\bm l_{\mathcal N_2} &\implies \bm v_1\cdot\bm l_{\mathcal N_1} \ne \bm v_1\cdot\bm l_{\mathcal N_2} ~\text{or}~\bm v_2\cdot\bm l_{\mathcal N_1} \ne \bm v_2\cdot\bm l_{\mathcal N_2}\\
&\implies \mathcal N_1, \mathcal N_2~\text{are distinguishable}.
\end{aligned}
\end{equation}
Thus $\bm v_1+\bm v_2\in F_L$.
We also have $\bm v\in F_L \implies k\bm v\in F_L$ for all $k\in\mathbb R$. Therefore, $F_L$ forms a linear subspace of $\mathbb R^{|\Lambda|}$.
\end{proof}
Our goal is to give a precise characterization of the learnable space $F_L$.
For example, we may want to know the dimension of $F_L$, which represents the learnable degrees of freedom for the noise.
This is also the maximum number of linearly-independent equations about the logarithmic Pauli fidelities we can expect to extract from experiments.
Conversely, the unlearnable degrees of freedom roughly correspond to the number of independent gauge transformations.
We summarize these definitions as follows.
\begin{definition}
Given a Clifford gate set $\mathfrak G$, the learnable degrees of freedom $\mathrm{LDF}(\mathfrak G)$ and unlearnable degrees of freedom $\mathrm{UDF}(\mathfrak G)$ are defined as, respectively,
\begin{equation}
\mathrm{LDF}(\mathfrak G) \mathrel{\mathop:}\nobreak\mkern-1.2mu= \mathrm{dim}(F_L),\quad
\mathrm{UDF}(\mathfrak G) \mathrel{\mathop:}\nobreak\mkern-1.2mu= |\Lambda| - \mathrm{dim}(F_L).
\end{equation}
\end{definition}
\noindent Our approach is to relate $F_L$ to certain properties of a graph defined as follows.
\begin{definition}[Pattern transfer graph]
The pattern transfer graph associated with a Clifford gate set $\mathfrak G$ is a directed graph $G=(V,E)$ constructed as follows:
\begin{itemize}
\item $V(G) = \{0,1\}^n$.
\item $E(G) = \{e_{a,\mathcal G} \mathrel{\mathop:}\nobreak\mkern-1.2mu= (\mathrm{pt}(P_a),~\mathrm{pt}(\mathcal G(P_a))) ~|~ P_a\in{\sf P}^n,~\mathcal G\in\mathfrak{G} \}$.
\end{itemize}
\end{definition}
Each of the $2^n$ vertices corresponds to a possible Pauli pattern.
Each of the $|E| = |\Lambda| = |\mathfrak G|\cdot 4^n$ edges corresponds to a Pauli operator and a Clifford gate, and describes how the Clifford gate evolves the pattern of that Pauli operator. One can also think of each edge as corresponding to a unique Pauli fidelity ($e_{a,\mathcal G}\leftrightarrow \lambda_{a}^{\mathcal G}$).
The rationale for only tracking the Pauli pattern is that we assume the ability to implement noiseless single-qubit unitaries, which makes the actual single-qubit Pauli operators unimportant. Fig.~2 of main text shows the pattern transfer graphs for a CNOT gate, a SWAP gate, and a gate set of CNOT and SWAP, respectively.
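For concreteness, the following Python snippet sketches how the pattern transfer graph can be enumerated from the (phase-free) symplectic action of a gate set on Pauli operators; the helper functions and the restriction to a single CNOT are our own illustrative choices.
\begin{verbatim}
from itertools import product

def cnot_conj(x, z, c, t):
    # Phase-free CNOT conjugation (control c, target t) in the symplectic
    # picture: X_c -> X_c X_t and Z_t -> Z_c Z_t, all other generators fixed.
    x, z = list(x), list(z)
    x[t] ^= x[c]
    z[c] ^= z[t]
    return tuple(x), tuple(z)

def pattern(x, z):
    # Binary support pattern of the Pauli with symplectic coordinates (x, z).
    return tuple(int(a or b) for a, b in zip(x, z))

def pattern_transfer_edges(n, gates):
    # One edge (pt(P), pt(G(P)), gate label) per Pauli operator P and gate G.
    edges = []
    for bits in product([0, 1], repeat=2 * n):
        x, z = bits[:n], bits[n:]
        for label, act in gates.items():
            xp, zp = act(x, z)
            edges.append((pattern(x, z), pattern(xp, zp), label))
    return edges

edges = pattern_transfer_edges(2, {"CNOT": lambda x, z: cnot_conj(x, z, 0, 1)})
print(len(edges))                        # 16 = 4^2 Paulis x 1 gate
print(sum(u != v for u, v, _ in edges))  # 8 edges change the Pauli pattern
\end{verbatim}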
Next, we give some definitions from graph theory (see~\cite{gleiss2003circuit,bollobas1998modern}). A \emph{chain} is an alternating sequence of vertices and edges $z=(v_0,e_1,v_1,e_2,v_2,...,v_{q-1},e_q,v_q)$ such that each edge satisfies $e_k=(v_{k-1},v_k)$ or $e_k=(v_{k},v_{k-1})$.
A chain is \emph{simple} if it does not contain the same edge twice.
A closed chain (\textit{i.e.}, $v_0=v_q$) is called a \emph{cycle}. If an edge $e_k$ in a chain satisfies $e_k = (v_{k-1},v_k)$, it is called an \emph{oriented edge}. A chain consisting solely of oriented edges is called a \emph{path}. A closed path is called an \emph{oriented cycle} or a \emph{circuit}.
A graph is called \emph{strongly connected} if there is a path from every vertex to every other vertex. A graph is called \emph{weakly connected} if there is a chain from every vertex to every other vertex. The number of (strongly or weakly) \emph{connected components} is the minimum number of parts in a partition of the vertex set $V=V_1\cup\cdots\cup V_c$ such that each subgraph induced by a part is (strongly or weakly) connected.
We can equip a graph with vector spaces.
Following the notations of~\cite[Sec. II.3]{bollobas1998modern},
the \emph{edge space} $C_1(G)$ of a directed graph $G$ is the vector space of all linear functions from the edges $E(G)$ to $\mathbb R$.
By construction, $C_1(G)\cong \mathbb R^{|\Lambda|} \cong F$.
Every linear function of the logarithmic Pauli fidelities naturally corresponds to a linear function of the edges according to the label of the edges ($l_{a}^{\mathcal G} \leftrightarrow e_{a,\mathcal G}$).
Again, we use vectors in $\mathbb R^{|\Lambda|}$ to refer to elements of $C_1(G)$. The inner product on $C_1(G)$ is defined as the standard inner product on $\mathbb R^{|\Lambda|}$.
There are two subspaces of $C_1(G)$ that are of special interest.
For a simple cycle
$z$ in $G$, we assign a vector $\bm v_z\in C_1(G)$ as follows
\begin{equation}
\bm v_z(e) = \left\{
\begin{aligned}
+1,\quad& e\in z,~\text{$e$ is oriented.}\\
-1,\quad& e\in z,~\text{$e$ is not oriented.}\\
0,\quad& e\notin z.
\end{aligned}
\right.
\end{equation}
The \emph{cycle space} $Z(G)$ is the linear subspace of $C_1(G)$ spanned by all cycles $\bm v_z$ in $G$.
Given a partition of vertices $V=V_1\cup V_2$ such that there is at least one edge between $V_1$ and $V_2$, a \emph{cut} is the set of all edges $e = (u,v)$ such that one of $u,v$ belongs to $V_1$ and the other belongs to $V_2$. For each cut $p$ we assign a vector $\bm v_p\in C_1(G)$ as follows
\begin{equation}\label{eq:cut}
\bm v_p(e) = \left\{
\begin{aligned}
+1,\quad& e\in p,~\text{$e$ goes from $V_1$ to $V_2$.}\\
-1,\quad& e\in p,~\text{$e$ goes from $V_2$ to $V_1$.}\\
0,\quad& e\notin p.
\end{aligned}
\right.
\end{equation}
The \emph{cut space} $U(G)$ is the linear subspace of $C_1(G)$ spanned by all cuts $\bm v_p$ in $G$.
Note that different partitions of the vertices may result in the same cut vector if $G$ is disconnected.
\begin{lemma}\cite[Sec. II.3, Theorem~1]{bollobas1998modern}\label{le:complement}
The edge space $C_1(G)$ is the orthogonal direct sum of the cycle space $Z(G)$ and the cut space $U(G)$, whose dimensions are given by
\begin{equation}
\mathrm{dim}(Z(G)) = |E|-|V|+c(G),\quad
\mathrm{dim}(U(G)) = |V|-c(G),
\end{equation}
where $c(G)$ is the number of weakly connected components of $G$.
\end{lemma}
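As a quick numerical sanity check of these dimension formulas (not part of the original derivation), one can use the standard fact that the cut space is the row space of the signed vertex--edge incidence matrix while the cycle space is its kernel; for the pattern transfer graph of CNOT this reproduces $\mathrm{dim}(U(G))=2$ and $\mathrm{dim}(Z(G))=14$.
\begin{verbatim}
import numpy as np
from itertools import product

# Signed vertex-edge incidence matrix of the CNOT pattern transfer graph:
# column e carries +1 at its head and -1 at its tail; self-loops give zero.
cnot = {"IZ": "ZZ", "ZZ": "IZ", "IY": "ZY", "ZY": "IY",
        "XI": "XX", "XX": "XI", "YI": "YX", "YX": "YI",
        "XZ": "YY", "YY": "XZ", "XY": "YZ", "YZ": "XY"}  # others are fixed
pat = lambda p: tuple(int(c != "I") for c in p)
verts = list(product([0, 1], repeat=2))
paulis = ["".join(p) for p in product("IXYZ", repeat=2)]

B = np.zeros((len(verts), len(paulis)))
for j, p in enumerate(paulis):
    tail, head = pat(p), pat(cnot.get(p, p))
    if tail != head:
        B[verts.index(tail), j] -= 1
        B[verts.index(head), j] += 1

rank = np.linalg.matrix_rank(B)
print(rank, len(paulis) - rank)  # 2 14, i.e. |V|-c(G) and |E|-|V|+c(G)
\end{verbatim}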
In some cases, we are more interested in circuits (oriented cycles) rather than general cycles. The following lemma gives a sufficient condition for the cycle space to have a circuit basis, \textit{i.e.}, to be spanned by oriented cycles.
\begin{lemma}\cite[Theorem~7]{gleiss2003circuit}\label{le:circuit}
A directed graph has a circuit basis if it is strongly connected, or it is a union of strongly connected subgraphs.
\end{lemma}
With all the graph theoretical tools introduced above, we are ready to present the main result of this section.
\begin{theorem}\label{th:space}
Under Assumptions 1-4, for any $\mathfrak G$, we have
$F_L \cong Z(G)$.
Explicitly, a linear function $f_{\bm v}(\bm l) = \bm v\cdot\bm l$ is learnable if and only if $\bm v$ belongs to the cycle space $Z(G)$.
\end{theorem}
We give the proof at the end of this section.
The proof involves two parts.
The first is to show that every cycle is learnable using a variant of cycle benchmarking~\cite{erhard2019characterizing}, thus the cycle space belongs to the learnable space. The second part is to show that every cut induces a gauge transformation~\cite{nielsen2021gate}, and thus the learnable space must be orthogonal to the cut space, which implies it lies in the cycle space.
We remark that Theorem~\ref{th:nogo} can be viewed as a corollary of Theorem~\ref{th:space}.
This is because an individual Pauli fidelity $\lambda_a^{\mathcal G}$ whose Pauli pattern changes (\textit{i.e.}, $\mathrm{pt}(P_a)\ne\mathrm{pt}(\mathcal G(P_a))$) corresponds to a single edge between two distinct vertices in the pattern transfer graph, which does not belong to the cycle space and is thus unlearnable. On the other hand, a Pauli fidelity without Pauli pattern change corresponds to a self-loop in the pattern transfer graph, which belongs to the cycle space by definition, and is thus learnable.
Combining Theorem~\ref{th:space} with Lemma~\ref{le:complement} leads to the following.
\begin{corollary}\label{co:udf}
The learnable and unlearnable degrees of freedom associated with $\mathfrak G$ are given by
\begin{equation}
\mathrm{LDF}(\mathfrak G) = |\mathfrak G|\cdot 4^n-2^n+c(\mathfrak G),\quad
\mathrm{UDF}(\mathfrak G) = 2^n-c(\mathfrak G),
\end{equation}
where $c(\mathfrak G)$ is the number of connected components of the pattern transfer graph associated with $\mathfrak G$.
\end{corollary}
Note that the unlearnable degrees of freedom always constitute an exponentially small fraction of all parameters, though their number can still grow exponentially with $n$.
Examples of some gate sets are given in Table~\ref{tab:UDF} and Figure~\ref{fig:more_gate_sets}. One can notice some interesting properties. The UDF of CNOT and SWAP equal $2$ and $1$, respectively, but a gate set containing both has $\mathrm{UDF}=2$. This means UDF is not ``additive''. The interdependence between different gates can give us more learnable degrees of freedom.
However, Corollary~\ref{co:udf} implies that the UDF of a gate set cannot be smaller than the UDF of any of its subset.
This is because adding new gates can only decrease the number of connected components $c(\mathfrak G)$ of the pattern transfer graph.
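The UDF values in Table~\ref{tab:UDF} can be reproduced by counting weakly connected components directly. The sketch below (our own helper code) does this with a union-find over Pauli patterns for the $3$-qubit gate set $\mathrm{\{CNOT_{12},CNOT_{23},CNOT_{31}\}}$.
\begin{verbatim}
from itertools import product

def cnot_conj(x, z, c, t):
    # Phase-free symplectic action of CNOT with control c and target t.
    x, z = list(x), list(z)
    x[t] ^= x[c]
    z[c] ^= z[t]
    return tuple(x), tuple(z)

def udf(n, gate_actions):
    # UDF = 2^n - c(G), where c(G) is the number of weakly connected
    # components of the pattern transfer graph (found by union-find).
    pat = lambda x, z: tuple(int(a or b) for a, b in zip(x, z))
    parent = {v: v for v in product([0, 1], repeat=n)}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for bits in product([0, 1], repeat=2 * n):
        x, z = bits[:n], bits[n:]
        for act in gate_actions:
            xp, zp = act(x, z)
            parent[find(pat(x, z))] = find(pat(xp, zp))
    return 2 ** n - len({find(v) for v in parent})

gates = [lambda x, z: cnot_conj(x, z, 0, 1),   # CNOT_12
         lambda x, z: cnot_conj(x, z, 1, 2),   # CNOT_23
         lambda x, z: cnot_conj(x, z, 2, 0)]   # CNOT_31
print(udf(3, gates))  # 6, matching Table I
\end{verbatim}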
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Number of qubits $n$ & Gate set $\mathfrak G$ & Number of parameters $|\Lambda|=4^n|\mathfrak G|$ & $\mathrm{UDF}(\mathfrak G)$\\
\hline
2 & CNOT & 16 & 2\\
2 & SWAP & 16 & 1\\
2 & \{CNOT, SWAP\} & 32 & 2\\
3 & $\mathrm{\{CNOT_{12},CNOT_{23},CNOT_{31}\}}$ & 192 & 6\\
3 & $\mathrm{CIRC_3}$ & 64 & 4\\
\hline
\end{tabular}
\caption{The unlearnable degrees of freedom of some gate sets. Here $\mathrm{CIRC_3}$ is the circular permutation on $3$ qubits. UDF is calculated by applying Corollary~\ref{co:udf} to the corresponding pattern transfer graph in Fig.~2 of main text and Fig.~\ref{fig:more_gate_sets}.}
\label{tab:UDF}
\end{table}
\begin{figure}
\caption{Pattern transfer graphs for $\mathrm{\{CNOT,~SWAP\}}$ and the $3$-qubit gate sets in Table~\ref{tab:UDF}.}
\label{fig:more_gate_sets}
\end{figure}
\begin{proof}[Proof of Theorem~\ref{th:space}]
The proof is divided into showing $Z(G)\subseteq F_L$ and $F_L\subseteq Z(G)$ (up to the natural isometry between $F$ and $C_1(G)$).
$Z(G)\subseteq F_L$: Roughly, this is equivalent to saying that all cycles are learnable.
We will first show that the pattern transfer graph always has a circuit basis, and then show that the linear function associated with each circuit can be learned using a variant of cycle benchmarking protocol~\cite{erhard2019characterizing}.
We begin by showing that the pattern transfer graph $G$ associated with a gate set $\mathfrak G$ is a union of strongly connected subgraphs. This is equivalent to saying that for any vertices $u,v\in V(G)$, if there is a path from $u$ to $v$, there must be a path from $v$ to $u$.
It suffices to show that for each edge $e=(u,v)$ there is a path from $v$ to $u$, since any path is just a concatenation of edges.
By definition, the existence of $e=(u,v)$ implies there exists $P\in {\sf P}^n$ and $\mathcal G\in\mathfrak G$ such that $\mathrm{pt}(P) = u$ and $\mathrm{pt}(Q) = v$ where $Q\mathrel{\mathop:}\nobreak\mkern-1.2mu= \mathcal G(P)$. Since a Clifford gate is a permutation on the Pauli group, there must exist some integer $d>0$ such that $\mathcal G^d = \mathcal I$, thus $P = \mathcal G^{d-1}(Q)$, which induces the following path from $v$ to $u$:
\begin{equation*}
(
\mathrm{pt}(Q),~e_{Q,\mathcal G},~\mathrm{pt}(\mathcal G(Q)),~e_{\mathcal G(Q),\mathcal G},~ \mathrm{pt}(\mathcal G^2(Q)),~\cdots,~ \mathrm{pt}(\mathcal G^{d-2}(Q)),~ e_{\mathcal G^{d-2}(Q),\mathcal G},~ \mathrm{pt}(\mathcal G^{d-1}(Q))
).
\end{equation*}
One can verify this is a path according to the definition of $G$. This shows that $G$ is indeed a union of strongly connected subgraphs. According to Lemma~\ref{le:circuit}, $G$ has a circuit basis that spans the cycle space $Z(G)$.
Now we show that every circuit in $G$ represents a learnable function.
Consider an arbitrary circuit $z = (v_0,e_1,v_1,e_2,v_2,...,v_{q-1},e_q,v_q\equiv v_0)$. For each $k=1...q$, the edge $e_k$ corresponds to a Pauli operator $P_k\in{\sf P}^n$ and a Clifford gate $\mathcal G_k \in \mathfrak G$ such that $\mathrm{pt}(P_k) = v_{k-1}$ and $\mathrm{pt}(Q_k) = v_{k}$ where $Q_k\mathrel{\mathop:}\nobreak\mkern-1.2mu= \mathcal G_k(P_k)$.
On the other hand, since $\mathrm{pt}(Q_k)=\mathrm{pt}(P_{k+1})$, there exists a product of single qubit unitaries $\mathcal U_k$ such that $P_{k+1} = \mathcal U_k(Q_k)$ for $k=1...q$ (where we define $P_{q+1}\mathrel{\mathop:}\nobreak\mkern-1.2mu= P_1$, as $\mathrm{pt}(Q_q)=\mathrm{pt}(P_1)$ by assumptions).
Consider the following gate sequence,
\begin{equation}
\mathcal C \mathrel{\mathop:}\nobreak\mkern-1.2mu= \mathcal U_q\mathcal G_q\mathcal U_{q-1}\mathcal G_{q-1}\cdots\mathcal U_1\mathcal G_1
\end{equation}
One can see that $\mathcal C(P_1) = P_1$. Now we design the following experiments parameterized by a positive integer $m$,
\begin{itemize}
\item Initial state: $\rho_0 = (I+P_1)/2^n$,
\item POVM measurement: $E_{\pm 1} = (I\pm P_1)/{2}$,
\item Circuit: $\mathcal C^m = \left(\mathcal U_q\mathcal G_q\mathcal U_{q-1}\mathcal G_{q-1}\cdots\mathcal U_1\mathcal G_1\right)^m$.
\end{itemize}
Consider the outcome distribution generated by running these experiments within a noise model $\mathcal N$.
\begin{equation}
\begin{aligned}
p^{(m)}_{\pm 1}(\mathcal N) &= \Tr\left( \widetilde{E}_{\pm 1} \widetilde{\mathcal C}^m (\widetilde{\rho}_0)\right) \\
&= \Tr \left( \frac{I\pm P_1}{2} \cdot\left( \mathcal E^M \circ \left(\mathcal U_q\widetilde{\mathcal G}_q
\cdots\mathcal U_1\widetilde{\mathcal G}_1\right)^m\circ\mathcal E^S \right)
\left(\frac{I+P_1}{2^n}\right) \right)\\
&= \Tr\left(\frac{I\pm P_1}{2} \cdot\frac{I+\lambda^M_{P_1}
\left(\lambda^{\mathcal G_q}_{P_q}\cdots\lambda^{\mathcal G_2}_{P_2}\lambda^{\mathcal G_1}_{P_1}\right)^m
\lambda^S_{P_1}P_1}{2^n} \right)\\
&= \frac{1\pm\lambda^M_{P_1}
\left(\lambda^{\mathcal G_q}_{P_q}\cdots\lambda^{\mathcal G_2}_{P_2}\lambda^{\mathcal G_1}_{P_1}\right)^m
\lambda^S_{P_1}}{2}.
\end{aligned}
\end{equation}
The expectation value is
\begin{equation}
\mathbb E^{(m)}(\mathcal N) = \lambda^M_{P_1}
\left(\lambda^{\mathcal G_q}_{P_q}\cdots\lambda^{\mathcal G_2}_{P_2}\lambda^{\mathcal G_1}_{P_1}\right)^m
\lambda^S_{P_1}.
\end{equation}
If we take the ratio of expectation values of two experiments with consecutive $m$, we obtain (recall that all these Pauli fidelities are strictly positive by Assumption 4)
\begin{equation}
\mathbb E^{(m+1)}(\mathcal N)/\mathbb E^{(m)}(\mathcal N) = \lambda^{\mathcal G_q}_{P_q}\cdots\lambda^{\mathcal G_2}_{P_2}\lambda^{\mathcal G_1}_{P_1}.
\end{equation}
This implies that if two noise models have different values for the product of Pauli fidelities $\lambda^{\mathcal G_q}_{P_q}\cdots\lambda^{\mathcal G_2}_{P_2}\lambda^{\mathcal G_1}_{P_1}$, the above experiments would be able to distinguish between them. Therefore, $\lambda^{\mathcal G_q}_{P_q}\cdots\lambda^{\mathcal G_2}_{P_2}\lambda^{\mathcal G_1}_{P_1}$ is a learnable function.
By taking the logarithm of this expression, we see that $f(\bm l)\mathrel{\mathop:}\nobreak\mkern-1.2mu=\sum_{k=1}^q l_{P_k}^{\mathcal G_k}$ is a learnable linear function of the logarithmic Pauli fidelities.
Notice that $f(\bm l)$ exactly corresponds to the circuit $z$ according to the natural isometry between $F$ and $C_1(G)$. This tells us that every circuit in $G$ indeed corresponds to a learnable linear function.
Combining with the fact that the circuits in $G$ span the cycle space $Z(G)$, and the fact that learnable functions are closed under linear combination (Lemma~\ref{le:learnable_is_space}), we conclude that $Z(G)\subseteq F_L$.
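The depth dependence derived above is easy to simulate. The snippet below is a minimal numerical sketch with made-up fidelity values (not the protocol used in our experiments): it draws $\pm 1$ outcomes with mean $\mathbb E^{(m)}$ and recovers the SPAM-free cycle product from a log-linear fit, which is equivalent to taking ratios of consecutive depths.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def estimate_expectation(m, lam_cycle, lam_spam, shots=10000):
    # E^(m) = lam_spam * lam_cycle^m; outcomes are +/-1 with mean E^(m).
    e = lam_spam * lam_cycle ** m
    outcomes = rng.choice([+1, -1], size=shots, p=[(1 + e) / 2, (1 - e) / 2])
    return outcomes.mean()

lam_cycle = 0.97 * 0.98   # product of Pauli fidelities along the circuit z
lam_spam = 0.95           # lambda^M_{P_1} * lambda^S_{P_1}
depths = [1, 2, 4, 8, 16, 32]
est = [estimate_expectation(m, lam_cycle, lam_spam) for m in depths]

# log E^(m) = m*log(lam_cycle) + log(lam_spam): the slope gives the cycle
# product, while SPAM noise only enters through the intercept.
slope, intercept = np.polyfit(depths, np.log(est), 1)
print(np.exp(slope))      # close to 0.9506 = 0.97 * 0.98
\end{verbatim}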
$F_L\subseteq Z(G)$:
For this part, we just need to show that $F_L$ is orthogonal to the cut space $U(G)$, which is the orthogonal complement of the cycle space $Z(G)$. To show this, we will construct a gauge transformation for each element of $U(G)$. The definition of learnability then requires a learnable linear function to be invariant under all such gauge transformations, and hence orthogonal to the entire cut space.
Consider a cut $V = V_1 \cup V_2$ (such that there is at least one edge between $V_1$ and $V_2$). We define the \emph{gauge transform map} $\mathcal M$ as the following Pauli diagonal map,
\begin{equation}
\mathcal M(P) \mathrel{\mathop:}\nobreak\mkern-1.2mu= \left\{
\begin{aligned}
\eta P, \quad& \text{if}~\mathrm{pt}(P)\in V_1,\\
P, \quad& \text{if}~\mathrm{pt}(P)\in V_2,
\end{aligned}
\right.\quad\forall P\in{\sf P}^n,
\end{equation}
for a positive parameter $\eta\ne 1$. The gauge transformation induced by $\mathcal M$ is defined in the same way as Eq.~\eqref{eq:gauge_trans}.
We will show that there exist two noise models satisfying all the assumptions that are related by a gauge transformation (thus indistinguishable) but yield different values for the function corresponding to the cut $V_1\cup V_2$.
Starting with a noise model $\mathcal N = \{\mathcal E^S,\mathcal E^M, \Lambda\}$, we first calculate the gauge transformed noise model $\mathcal N'$.
The SPAM noise channels are transformed as
\begin{equation}\label{eq:spam_update_2}
\mathcal E^{S'} = \mathcal M\mathcal E^S,\quad \mathcal E^{M'} = \mathcal E^M\mathcal M^{-1},
\end{equation}
which are still Pauli diagonal maps.
Using exactly the same argument as in the proof of Theorem~\ref{th:nogo}, by choosing $\eta$ to be sufficiently close to $1$, these transformed maps are guaranteed to be CPTP and satisfy Assumption~4.
Secondly, the single-qubit unitaries are transformed as $\mathcal U' = \mathcal M\mathcal U\mathcal M^{-1}$. Calculate the following inner product for any $P,Q\in{\sf P}^n$,
\begin{equation}\label{eq:gauge_trans_2}
\begin{aligned}
\Tr(P\cdot\mathcal U'(Q))&=\Tr(\mathcal M^\dagger(P)\cdot \mathcal U(\mathcal M^{-1}(Q)) )\\
&= \eta^{\bm 1_{V_1}[\mathrm{pt}(P)]}(\eta^{-1})^{\bm 1_{V_1}[\mathrm{pt}(Q)]}\Tr(P\cdot\mathcal U(Q)).
\end{aligned}
\end{equation}
Here $\bm 1_{V_1}$ is the indicator function of $V_1$.
We see that $\Tr(P\cdot\mathcal U'(Q))=\Tr(P\cdot\mathcal U(Q))$ if $\mathrm{pt}(P)=\mathrm{pt}(Q)$.
A crucial observation is that a product of single-qubit unitaries can never change the pattern of the input Pauli. More precisely, $\mathcal U(Q)$ is a linear combination of Pauli operators with the same pattern as $Q$. Therefore, if $\mathrm{pt}(P)\ne\mathrm{pt}(Q)$, we would have $\Tr(P\cdot\mathcal U'(Q))=\Tr(P\cdot\mathcal U(Q))=0$.
Combining the two cases, we conclude $\mathcal U'=\mathcal U$, \textit{i.e.}, the single-qubit unitaries are still noiseless in $\mathcal N'$.
Finally, the noisy Clifford gates are transformed as
\begin{equation}
\begin{aligned}
\widetilde{\mathcal G}'&= \mathcal M{\mathcal G}\Lambda_{\mathcal G}\mathcal M^{-1}\\
&= \mathcal G\mathcal G^{-1}\mathcal M{\mathcal G}\Lambda_{\mathcal G}\mathcal M^{-1}\\
&\mkern-1.2mu=\nobreak\mathrel{\mathop:} \mathcal G\Lambda_{\mathcal G}'
\end{aligned}
\end{equation}
where the transformed noise channel $\Lambda_{\mathcal G}'\mathrel{\mathop:}\nobreak\mkern-1.2mu= \mathcal G^{-1}\mathcal M{\mathcal G}\Lambda_{\mathcal G}\mathcal M^{-1}$ is a Pauli diagonal map. We now calculate its Pauli eigenvalues. For $P\in{\sf P}^n$,
\begin{equation}\label{eq:f_update_2}
\begin{aligned}
\Lambda_{\mathcal G}'(P) &= \mathcal G^{-1}\mathcal M{\mathcal G}\Lambda_{\mathcal G}\mathcal M^{-1}(P)\\
&=\eta^{\bm 1_{V_1}[\mathrm{pt}(\mathcal G(P))]}(\eta^{-1})^{\bm 1_{V_1}[\mathrm{pt}(P)]}\lambda^{\mathcal G}_P~P\\
&=\left\{
\begin{aligned}
\eta^{-1} \lambda_P^{\mathcal G}\,P,\quad& \mathrm{pt}(P)\in V_1,~\mathrm{pt}(\mathcal G(P))\in V_2,\\
\eta\, \lambda_P^{\mathcal G}\,P,\quad& \mathrm{pt}(P)\in V_2,~\mathrm{pt}(\mathcal G(P))\in V_1,\\
\lambda_P^{\mathcal G}\,P,\quad& \text{otherwise}.\\
\end{aligned}
\right.
\end{aligned}
\end{equation}
Again, Assumption 4 guarantees that $\Lambda_{\mathcal G}'$ is a CPTP map satisfying all of our noise assumptions as long as $\eta$ is sufficiently close to $1$. We omit the argument here as it is the same as in the previous proof.
Define $t_p \mathrel{\mathop:}\nobreak\mkern-1.2mu= -\log \eta$ where $p$ denotes the cut $V_1\cup V_2$. The above gauge transformation of the log Pauli fidelities can then be written as
\begin{equation}
\bm l' = \bm l + t_p \bm v_p
\end{equation}
where $\bm v_p$ is the cut vector of $V = V_1\cup V_2$ as defined in Eq.~\eqref{eq:cut}.
We have just defined a gauge transformation $\mathcal M_p$ for an arbitrary cut $p$.
Fix a basis $B$ of the cut space (where the vectors in $B$ have the form in Eq.~\eqref{eq:cut}). For a generic element of the cut space $\bm v\in U(G)$, we can decompose it as $\bm v = \sum_{p\in B} t_p \bm v_p$ ($t_p\in\mathbb{R}$). We define the gauge transformation $\mathcal M_{\bm v}$ associated with $\bm v$ as a consecutive application of the gauge transformations $\{\mathcal M_p\}$ for each $p\in B$, each with parameter $t_p$. Here we assume that each $|t_p|$ is sufficiently small, as otherwise we can rescale the vector. This implies that $\mathcal M_{\bm v}$ is a valid gauge transformation.
The effect of such a transformation is
\begin{equation}
\bm l' = \bm l + \bm v.
\end{equation}
Now, Definition~\ref{de:learnability} implies that a learnable function $\bm f$ must remain unchanged under gauge transformations (as they result in indistinguishable noise models), which means that $\bm f\cdot \bm l' = \bm f\cdot \bm l$. Thus, for all $\bm f\in F_L$, and all $\bm v \in U(G)$, we must have
\begin{equation}
\bm f\cdot \bm v = \bm f\cdot \bm l' - \bm f\cdot \bm l = 0.
\end{equation}
That is, $F_L$ must be orthogonal to the cut space $U(G)$. According to Lemma~\ref{le:complement}, $Z(G)$ is the orthogonal complement of $U(G)$, so we conclude that $F_L\subseteq Z(G)$. This completes the second part of our proof.
\end{proof}
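To make the cut-induced gauge freedom concrete, the following sketch (with illustrative values only) applies the shift $\bm l' = \bm l + t_p\bm v_p$ for the CNOT pattern transfer graph with the cut $V_1=\{(1,1)\}$, and checks that the cycle function $l_{IZ}+l_{ZZ}$ is invariant while the single fidelity $l_{IZ}$ is not.
\begin{verbatim}
import numpy as np
from itertools import product

paulis = ["".join(p) for p in product("IXYZ", repeat=2)]
idx = {p: i for i, p in enumerate(paulis)}

# Cut vector for V_1 = {(1,1)} in the CNOT pattern transfer graph:
# +1 for edges leaving V_1, -1 for edges entering V_1, 0 otherwise.
v_cut = np.zeros(16)
for p in ["ZZ", "ZY", "XX", "YX"]:
    v_cut[idx[p]] = +1
for p in ["IZ", "IY", "XI", "YI"]:
    v_cut[idx[p]] = -1

rng = np.random.default_rng(1)
l = -0.01 * rng.random(16)   # some log Pauli fidelities
l[idx["II"]] = 0.0
t_p = 0.05                   # gauge parameter
l_gauge = l + t_p * v_cut

f_cycle = np.zeros(16); f_cycle[idx["IZ"]] = f_cycle[idx["ZZ"]] = 1
f_edge = np.zeros(16); f_edge[idx["IZ"]] = 1

print(f_cycle @ (l_gauge - l))   # 0.0  : l_IZ + l_ZZ is gauge invariant
print(f_edge @ (l_gauge - l))    # -0.05: l_IZ alone is shifted
\end{verbatim}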
\subsection{Learnability under no-crosstalk assumption}\label{sec:no_crosstalk}
As we commented before, the way we define the gate noise captures a general form of crosstalk~\cite{sarovar2020detecting}. One may ask whether further assuming that the gate noise has no crosstalk would make learning the noise much easier.
To consider this rigorously, we introduce the following optional assumption. See Fig.~\ref{fig:crosstalk} for an illustration.
\begin{itemize}
\item \textbf{Assumption 5} (No crosstalk.) For any $\mathcal G\in\mathfrak G$ that acts non-trivially only on a $k$-qubit subspace, the associated Pauli noise channel also acts non-trivially only on that subspace. In other words, if $\mathcal G = \mathcal G'\otimes \mathcal I$, we have $\widetilde{\mathcal G} = \left( \mathcal G'\circ\Lambda_{\mathcal G} \right)\otimes \mathcal I$ where $\Lambda_{\mathcal G}$ is a $k$-qubit Pauli channel
depending only on $\mathcal G$ and the (ordered) subset of qubits on which $\mathcal G$ acts.
\end{itemize}
\begin{figure}
\caption{Illustration of the crosstalk model. (a) A $4$-qubit circuit consisting of three ideal CNOT gates. (b) Full crosstalk. The noise channels are $4$-qubit and depend on the qubits the CNOT acts on. (c) No crosstalk. The noise channel acts only on a $2$-qubit subspace. It can still depend on the qubits the CNOT acts on.}
\label{fig:crosstalk}
\end{figure}
Assumption 5 reduces the number of independent parameters of a noise model.
One might expect certain unlearnable functions to become learnable with this assumption.
Here, we show that the simple criterion of learnability given in Theorem~\ref{th:nogo} still holds even in this case, as stated in the following proposition.
\begin{proposition}\label{prop:nogo_nocross}
With Assumption 1-5, for any $k$-qubit Clifford gate $\mathcal G$ and Pauli operator $P_a$, the Pauli fidelity $\lambda_a^{\mathcal G}$ is unlearnable if and only if $\mathcal G$ changes the pattern of $P_a$, \textit{i.e.}, $\mathrm{pt}(\mathcal G(P_a))\ne \mathrm{pt}(P_a)$.
\end{proposition}
\begin{proof}
We just need to modify the proof of Theorem~\ref{th:nogo}.
For the ``only if'' part, restrict our attention to the $k$-qubit subsystem that $\mathcal G$ acts on, and do a cycle benchmarking protocol as in the original proof. We can easily conclude that $\lambda_a^{\mathcal G}$ is learnable if $\mathrm{pt}(P_a) = \mathrm{pt}(\mathcal G(P_a))$.
For the ``if'' part, construct the same gauge transformation map as in the original proof.
That is, for an index $i\in[n]$ such that $\mathrm{pt}(P_a)_i\ne \mathrm{pt}(\mathcal G(P_a))_i$, let $\mathcal M = \mathcal D_i\otimes\mathcal I_{[n]\backslash i}$ where $\mathcal D_i$ is the single-qubit depolarizing channel on the $i$th qubit with some parameter $\eta$.
With the no-crosstalk assumption, a generic $k$-qubit noisy Clifford gate $\widetilde{\mathcal T}$ transforms as
\begin{equation}
\widetilde{\mathcal T}\otimes\mathcal I \mapsto \mathcal M\circ (\widetilde{\mathcal T}\otimes\mathcal I)\circ \mathcal M^{-1}.
\end{equation}
If $\mathcal T$ does not act on the $i$th qubit, $\mathcal M$ commutes with $\widetilde{\mathcal T}$ and the noisy Clifford gate remains unchanged.
If $\mathcal T$ acts non-trivially on the $i$th qubit,
\begin{equation}
\widetilde{\mathcal T}\otimes\mathcal I \mapsto (\mathcal D_i\circ\widetilde{\mathcal T}\circ\mathcal D_i^{-1})\otimes\mathcal I.
\end{equation}
This means the transformed noise channel acts non-trivially only on the $k$-qubit subsystem that $\mathcal G$ acts on, thus satisfies the no-crosstalk assumption.
The Pauli fidelities of the noise channel will be updated as Eq.~\eqref{eq:paulifidelityupdate}.
Following the same argument of the original proof, we conclude that $\lambda_{a}^{\mathcal G}$ is unlearnable if $\mathrm{pt}(P_a) \neq \mathrm{pt}(\mathcal G(P_a))$.
\end{proof}
It is also possible to generalize the graph theoretical characterization in Theorem~\ref{th:space} to the no-crosstalk case.
One challenge in this case is that, different edges in the pattern transfer graph no longer stand for independent variables.
For example, consider a $3$-qubit system and a CNOT on the first two qubits.
Since $\mathrm{CNOT}(XI) = XX$, we would have the following two edges in the pattern transfer graph
$$e_{XII,\mathrm{CNOT}\otimes\mathcal I} = (100,110),\quad e_{XIX,\mathrm{CNOT}\otimes\mathcal I} = (101,111).$$
However, with the no-crosstalk assumption, we have
\begin{equation}
\lambda_{XII}^{\mathrm{CNOT}\otimes \mathcal I} =
\lambda_{XIX}^{\mathrm{CNOT}\otimes \mathcal I} =
\lambda_{XI}^{\mathrm{CNOT}},
\end{equation}
which means the above two edges represent the same Pauli fidelity.
As a result, a gauge transformation (as defined in the proof of Theorem~\ref{th:space}) that changes $\lambda_{XII}$ and $\lambda_{XIX}$ differently is no longer a valid transformation.
In other words, a cut represents a valid gauge transformation only if it cuts through all the edges for the same Pauli fidelity simultaneously.
This could decrease the number of unlearnable degrees of freedom.
We leave the precise characterization of the learnable space with no-crosstalk assumptions as an open question.
It is also interesting to study the learnability under other practical assumptions about the Pauli noise model, such as the sparse Pauli-Lindbladian model~\cite{berg2022probabilistic} and the Markovian graph model~\cite{flammia2020efficient,harper2020efficient}.
\subsection{Learnability of Pauli error rates}
We have been focusing on the learnability of Pauli fidelities $\bm\lambda$. One may ask similar questions about Pauli error rates $\bm p$.
It turns out that, at least in the weak-noise regime (\textit{i.e.}, $\lambda_a$ close to $1$), the learnability of $\bm p$ and that of $\bm \lambda$ are closely related. To see this, note that
\begin{equation}
\begin{aligned}
p_a &= \frac{1}{4^n}\sum_{b}(-1)^\expval{a,b}\lambda_b\\
&\approx \frac{1}{4^n}\sum_{b}(-1)^\expval{a,b}(\log\lambda_b + 1)\\
&=\frac{1}{4^n}\sum_{b}(-1)^\expval{a,b} l_b + \delta_{a,\bm 0},
\end{aligned}
\end{equation}
which means that $p_a$ is approximately a linear function of the logarithmic Pauli fidelity $\bm l$.
Therefore, one can in principle use Theorem~\ref{th:space} to completely decide the learnability of any Pauli error rates (with weak-noise approximation).
Furthermore, since the Walsh-Hadamard transformation is invertible, different $p_a$ correspond to linearly independent functions of $\bm l$.
This means that the number of linearly independent equations we can obtain about the Pauli error rates is the same as the learnable degrees of freedom of the Pauli fidelities.
In Table~\ref{tab:CNOT_full}, we list a basis for all the learnable Pauli fidelities/Pauli error rates. One can see that there is an exact correspondence between these two. We leave a fully general argument for future study.
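The first-order relation above is easy to check numerically. The following sketch (with arbitrary toy fidelities) builds the Walsh-Hadamard matrix from the symplectic inner product $\langle a,b\rangle$ and compares the exact Pauli error rates with the linearized expression.
\begin{verbatim}
import numpy as np
from itertools import product

paulis = ["".join(p) for p in product("IXYZ", repeat=2)]

def sym(a, b):
    # <a,b> = 0 if the Pauli strings commute, 1 if they anticommute.
    return sum(x != "I" and y != "I" and x != y for x, y in zip(a, b)) % 2

W = np.array([[(-1) ** sym(a, b) for b in paulis] for a in paulis])

lam = 1 - 0.02 * np.random.default_rng(2).random(16)  # toy Pauli fidelities
lam[0] = 1.0
p_exact = W @ lam / 16                 # p_a = 4^{-n} sum_b (-1)^<a,b> lambda_b
p_approx = W @ np.log(lam) / 16
p_approx[0] += 1.0                     # the delta_{a,0} term
print(np.max(np.abs(p_exact - p_approx)))  # small (second order in the noise)
\end{verbatim}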
\begin{table}[!htp]
\centering
\begin{tabular}{|c|c|}
\hline
Learnable log Pauli fidelities & $l_{II},l_{ZI},l_{IX},l_{ZX},
l_{XZ},l_{YY},l_{XY},l_{YZ},$
\\&$l_{IZ}+l_{ZZ},l_{IY}+l_{ZY},l_{IZ}+l_{ZY},l_{XI}+l_{XX},l_{YI}+l_{YX},l_{XI}+l_{YX}$ \\
\hline
Learnable Pauli error rates & $p_{II},p_{ZI},p_{IX},p_{ZX},
p_{XZ},p_{YY},p_{XY},p_{YZ},$
\\ (approximately)&$p_{IZ}+p_{ZZ},p_{IY}+p_{ZY},p_{IZ}+p_{ZY},p_{XI}+p_{XX},p_{YI}+p_{YX},p_{XI}+p_{YX}$ \\
\hline
\end{tabular}
\caption{A complete basis for the learnable linear functions of log Pauli fidelities and Pauli error rates (the latter is approximate) for a single CNOT gate.}
\label{tab:CNOT_full}
\end{table}
\section{Additional details about the numerical simulations}\label{sec:numerics}
In this section, we provide more details about the numerical simulations mentioned in the main text. The simulation is conducted using \texttt{qiskit}~\cite{Qiskit}, an open-source Python package for quantum computing. We simulate a two-qubit system where single-qubit Clifford gates are noiseless, and CNOT is subject to amplitude damping channels on both qubits. Note that amplitude damping is not Pauli noise, but we apply randomized compiling and will only estimate its Pauli diagonal part. We also note that \texttt{qiskit} adds the noise channel \emph{after} the gate by default, while our theory assumes the noise to act \emph{before} the gate. The two models can easily be converted into each other via
\begin{equation}
\mathcal G\circ\Lambda_{\mathcal G} = (\mathcal G\circ\Lambda_{\mathcal G}\circ\mathcal G^{\dagger})\circ\mathcal G = \Lambda_{\mathcal G}'\circ\mathcal G.
\end{equation}
If $\mathcal G$ is Clifford, $\Lambda_{\mathcal G}$ is a Pauli channel if and only if $\Lambda_{\mathcal G}'$ is a Pauli channel. In the following, we stay consistent with our theory and assume the noise to act before the gate.
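For a Clifford gate this conversion amounts to a relabelling of the Pauli fidelities, $\lambda'_a = \lambda_{\mathcal G^{-1}(a)}$. A minimal sketch for CNOT (which is its own inverse) is given below; the dictionary lists the phase-free conjugation action, and all other labels are fixed points.
\begin{verbatim}
cnot_map = {"IZ": "ZZ", "ZZ": "IZ", "IY": "ZY", "ZY": "IY",
            "XI": "XX", "XX": "XI", "YI": "YX", "YX": "YI",
            "XZ": "YY", "YY": "XZ", "XY": "YZ", "YZ": "XY"}

def before_to_after(lam):
    # lam: dict mapping a Pauli label to the fidelity of the noise-before-gate
    # channel Lambda_G.  Returns the fidelities of Lambda'_G = G Lambda_G G^dag
    # (noise after the gate), using that CNOT is its own inverse.
    return {a: lam[cnot_map.get(a, a)] for a in lam}

lam = {a: 1.0 for a in cnot_map}
lam.update({"II": 1.0, "IX": 1.0, "ZI": 1.0, "ZX": 1.0})
lam["IZ"] = 0.98
print(before_to_after(lam)["ZZ"])   # 0.98: the IZ fidelity moves to ZZ
\end{verbatim}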
Besides, we let the measurement have a $0.3\%$ bit-flip rate on each qubit and take the state preparation to be noiseless.
Fig.~\ref{fig:main_sim_cbraw} shows the estimates collected using standard CB and interleaved CB (circuits shown in Fig.~1 of main text). Compared to the true values, we see that both simulations yield accurate estimates of the learnable Pauli fidelities.
\begin{figure}
\caption{Numerical estimates of Pauli fidelities of a CNOT gate via standard CB (left) and CB with interleaved gates (right), using circuits shown in Fig.~1 of main text. Each Pauli fidelity is fitted using seven different circuit depths $L=[2,2^2,...,2^7]$. For each depth $C=30$ random circuits and $200$ shots of measurements are used.
The red crosses show the true fidelities and the red dashed line shows the average of the true fidelities within each two-Pauli group.
}
\label{fig:main_sim_cbraw}
\end{figure}
Fig.~\ref{fig:main_sim_cbfeasible} (a) shows the physically feasible region, in terms of $\{\lambda_{XX},\lambda_{ZZ}\}$, computed from the estimates using the approaches discussed in the main text.
Due to the special structure of the twirled amplitude damping noise (no $Z$-error), the feasible region for $\lambda_{XX}$ is extremely narrow. To mitigate the effect of statistical error, we allow a smoothing parameter $\varepsilon$ in calculating the physical region, relaxing the constraints to $p_a\ge-\varepsilon$. Here $\varepsilon$ is chosen to be the largest standard deviation in estimating the learnable Pauli fidelities. In Fig.~\ref{fig:main_sim_cbfeasible} (b)(c) we see that the true fidelity indeed falls into the physical region and is in fact close to its lower-left corner.
\begin{figure}
\caption{Feasible region of the learned Pauli noise model, using data from Fig.~\ref{fig:main_sim_cbraw}.}
\label{fig:main_sim_cbfeasible}
\end{figure}
Fig.~\ref{fig:app_sim_intercept} shows the simulation results of intercept CB. We see that we obtain accurate estimates even for the unlearnable Pauli fidelities. Moreover, the estimates lie inside the physically feasible region up to one standard deviation.
This shows that intercept CB should work well in resolving the unlearnability if we do have access to noiseless state preparation (and the method is robust against measurement noise). Therefore, failure of this method in an experiment implies a non-negligible state-preparation error, as discussed in the main text.
\begin{figure}
\caption{The learned Pauli noise model using intercept CB. The feasible region (blue bars) is taken from Fig.~\ref{fig:main_sim_cbfeasible}.}
\label{fig:app_sim_intercept}
\end{figure}
\section{Justification for the claim in Sec.~\ref{sec:space}}\label{sec:justification}
We claim in Sec.~\ref{sec:space} that any measurement probability generated in an experiment can be expressed as a polynomial of Pauli fidelities, and that each term in the polynomial can be learned in a CB experiment. This is why it suffices to consider a single monomial of Pauli fidelities at a time. Here we justify this claim.
Consider the most general experimental design: prepare some initial state $\rho_0$, apply some quantum circuit $\mathcal C$, and conduct a POVM measurement $\{E_j\}_j$. Denote the noisy realization of these objects with a tilde. Because of noise, the probability of obtaining a certain measurement outcome $j$ is
\begin{equation}
\mathrm{Pr}(j) = \Tr\left( \widetilde{E}_j\widetilde{\mathcal C}(\widetilde{\rho}_0) \right) = \Tr\left( E_j\left(\Lambda^M\circ\widetilde{\mathcal C}\circ\Lambda^S\right)(\rho_0) \right) \equiv \Tr\left( E_j\rho' \right).
\end{equation}
Here $\Lambda^S,\Lambda^M$ are the noise channels for state preparation and measurement, respectively. Their Pauli fidelities are denoted by $\lambda_a^S$ and $\lambda_a^M$ for Pauli operator $a$, respectively. We define $\rho'\mathrel{\mathop:}\nobreak\mkern-1.2mu= (\Lambda^M\circ\widetilde{\mathcal C}\circ\Lambda^S)(\rho_0)$, which encodes all the information that can be extracted from quantum measurements. We will obtain a general formula for $\rho'$.
First note that a general noisy quantum circuit $\widetilde{\mathcal C}$ satisfying our assumptions can be expressed as
\begin{equation}
\widetilde{\mathcal C} = C\od m \circ \widetilde{\mathcal G}_{{m}} \circ \cdots \circ C\od 1 \circ \widetilde{\mathcal G}_{{1}} \circ C\od 0,
\end{equation}
where ${\mathcal G}_{{j}}\in{\mathfrak G}$ is an $n$-qubit Clifford gate and $C\od j$ is the tensor product of single-qubit gates. A crucial property of single-qubit gates is that they never change the Pauli pattern. More rigorously, one has
\begin{equation}
C\od{j}(P_a) = \sum_{b\sim \mathrm{pt}(a)}c_{b,a}\od{j}P_b,\quad\forall P_a\in{\sf P}^n,
\end{equation}
where $c_{b,a}\od{j}\in\mathbb R$, and the summation is over all $P_b$ that have the same Pauli pattern as $P_a$.
\noindent Now consider the action of $\widetilde{\mathcal C}$ on an arbitrary Pauli operator $P_a$.
\begin{equation}
\begin{aligned}
\widetilde{\mathcal C}(P_a) &= (C\od m \circ \widetilde{\mathcal G}_{{m}} \circ \cdots \circ C\od 1 \circ \widetilde{\mathcal G}_{{1}} \circ C\od 0)(P_a)\\
&= (C\od m \circ \widetilde{\mathcal G}_{{m}} \circ \cdots \circ C\od 1 \circ \widetilde{\mathcal G}_{{1}}) \left(\sum_{b_0\sim \mathrm{pt}(a)} c_{b_0,a}\od{0} P_{b_0} \right)\\
&= (C\od m \circ \widetilde{\mathcal G}_{{m}} \circ \cdots \circ C\od 1) \left(\sum_{b_0\sim \mathrm{pt}(a)} c_{b_0,a}\od 0\lambda_{b_0}^{\mathcal G_{1}} P_{\mathcal G_{1}(b_0)} \right)\\
&= (C\od m \circ \widetilde{\mathcal G}_{{m}} \circ \cdots \circ C\od 2) \left(\sum_{\substack{
b_0\sim \mathrm{pt}(a),\\
b_1\sim \mathrm{pt}(\mathcal G_{1}(b_0))
}} c_{b_1,\mathcal G_{1}(b_0)}\od{1}c_{b_0,a}\od{0} \lambda_{b_1}^{\mathcal G_{2}}\lambda_{b_0}^{\mathcal G_{1}} P_{\mathcal G_{2}(b_1)} \right)\\
&= \cdots\\
&= \sum_{\substack{
b_0\sim \mathrm{pt}(a),\\
b_1\sim \mathrm{pt}(\mathcal G_{1}(b_0)),\\
\dots\\
b_m\sim \mathrm{pt}(\mathcal G_{m}(b_{m-1}))
}}c_{b_m,\mathcal G_{m}(b_{m-1})}\od{m}\cdots c_{b_1,\mathcal G_{1}(b_0)}\od{1} c_{b_0,a}\od{0} \lambda_{b_{m-1}}^{\mathcal G_{m}}\cdots\lambda_{b_1}^{\mathcal G_{2}}\lambda_{b_0}^{\mathcal G_{1}} P_{b_m}.
\end{aligned}
\end{equation}
For any initial state $\rho_0$, we can decompose it via Pauli operators as
\begin{equation}
\rho_0 = \frac{1}{2^n} I + \sum_{a\ne\bm 0}\alpha_aP_a.
\end{equation}
Going through the state preparation noise, the quantum circuit, and the measurement noise, the state evolves to
\begin{equation}
\begin{aligned}\label{eq:rho'2new}
\rho' &= (\Lambda^{M}\circ\widetilde{\mathcal C}\circ \Lambda^{S})(\frac{1}{2^n}I + \sum_{a\ne\bm 0}\alpha_aP_a)\\
&= \frac{1}{2^n}I +\sum_{a\ne\bm 0}\alpha_a\sum_{\substack{
b_0\sim \mathrm{pt}(a),\\
b_1\sim \mathrm{pt}(\mathcal G_{1}(b_0)),\\
\dots\\
b_m\sim \mathrm{pt}(\mathcal G_{m}(b_{m-1}))
}}c_{b_m,\mathcal G_{m}(b_{m-1})}\od{m}\cdots c_{b_1,\mathcal G_{1}(b_0)}\od{1} c_{b_0,a}\od{0} ~\lambda_{\mathrm{pt}(b_m)}^M\lambda_{b_{m-1}}^{\mathcal G_{m}}\cdots\lambda_{b_1}^{\mathcal G_{2}}\lambda_{b_0}^{\mathcal G_{1}}\lambda_{\mathrm{pt}(a)}^S P_{b_m}\\
&\equiv\frac{1}{2^n}I +\sum_{a\ne\bm 0}\alpha_a\sum_{\substack{
b_0\sim \mathrm{pt}(a),\\
b_1\sim \mathrm{pt}(\mathcal G_{1}(b_0)),\\
\dots\\
b_m\sim \mathrm{pt}(\mathcal G_{m}(b_{m-1}))
}}c_{b_m,\mathcal G_{m}(b_{m-1})}\od{m}\cdots c_{b_1,\mathcal G_{1}(b_0)}\od{1} c_{b_0,a}\od{0} ~\Gamma_{\bm b,a} P_{b_m}.
\end{aligned}
\end{equation}
Here we define $\Gamma_{\bm b,a}=\lambda_{\mathrm{pt}(b_m)}^M\lambda_{b_{m-1}}^{\mathcal G_{m}}\cdots\lambda_{b_1}^{\mathcal G_{2}}\lambda_{b_0}^{\mathcal G_{1}}\lambda_{\mathrm{pt}(a)}^S$, which is a monomial of Pauli fidelities.
The measurement outcome probability $\mathrm{Pr}(j)$ is a linear combination of such $\Gamma_{\bm b,a}$ plus some constant.
Moreover, each $\Gamma_{\bm b,a}$ of the above form can also be learned from a simple experiment, by choosing the initial state to be a $+1$ eigenstate of $P_a$, measurement operator to be $P_{b_m}$, and $C^{(j)}$ to be the product of single-qubit Clifford gates satisfying $C^{(j)}(\mathcal G_j({b_{j-1}})) = {b_{j}}$ (which is possible because $\mathrm{pt}(b_j)=\mathrm{pt}(\mathcal G_j(b_{j-1}))$).
Therefore, to completely characterize a noise model, we only need to extract the products of Pauli fidelities in the form of $\Gamma_{\bm b,a}$. This justifies our earlier claim.
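The single-qubit layers $C\od j$ needed in such an experiment can be chosen qubit-by-qubit: whenever two Pauli strings share the same pattern, there is a single-qubit Clifford on each qubit mapping one letter to the other up to a sign (the sign only flips the ideal expectation value and can be tracked classically). The lookup table below is one illustrative choice; the gate names are ours, with ``HS'' denoting conjugation by the unitary $HS$ (so that $S$ acts first).
\begin{verbatim}
# Single-qubit Cliffords mapping one Pauli letter to another, up to a sign.
CLIFF = {("I", "I"): "I", ("X", "X"): "I", ("Y", "Y"): "I", ("Z", "Z"): "I",
         ("X", "Y"): "S", ("Y", "X"): "S",
         ("X", "Z"): "H", ("Z", "X"): "H",
         ("Y", "Z"): "HS", ("Z", "Y"): "SH"}

def interleaving_layer(src, dst):
    # src, dst: Pauli strings with identical support patterns, e.g. the layer
    # mapping CNOT(IZ) = ZZ onto the next edge label ZY in a circuit of the
    # pattern transfer graph.
    assert all((a == "I") == (b == "I") for a, b in zip(src, dst)), \
        "patterns must match"
    return [CLIFF[(a, b)] for a, b in zip(src, dst)]

print(interleaving_layer("ZZ", "ZY"))   # ['I', 'SH']
\end{verbatim}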
\end{appendix}
\end{document}
| 3,343 | 129,481 |
en
|
train
|
0.4976.3
|
&:= \left[\left((v\cdotot n(y))I+(n(y) \otimes v) \right)\left(I-\frac{v\otimes n(y)}{v\cdotot n(y)}\right)\right] ,\quad (y,v)\in \{\partial\Omega\times \mathbb{R}^2\} \backslash \gamma_0. \\
\end{split}
\end{equation}
Notice that $A_{v,y}$ is a matrix-valued function $A_{v,y}: \{\mathbb{R}^d\times\partialartial \Omega\}\backslash \gamma_0 \rightarrow \mathbb{R}^d\times \mathbb{R}^d $. ($d=2$ in this paper in particular) In fact, $A_{v, y}$ can be written as
\begin{equation}gin{equation*}
\begin{equation}gin{split}
A_{v, y}
&= \nablala_{y}\big( (v\cdotot n(y))n(y)\big) =\left( (v \cdotot n(y) ) \nablala_y n(y) + (n(y)\otimes v ) \nablala_y n(y)\right),\\
\end{split}
\end{equation*}
which is identical to \eqref{def A} by \eqref{normal}. \\
Throughout this paper, we denote the $v$-derivative of the $i$-th column of the matrix $A_{v,y}$ by $\nablala_v A^i_{v,y}$, where $A^i$ be the $i$-th column of a matrix $A$ for $1\leq i \leq d$. For fixed $i$, it means that
\begin{equation}gin{equation*}
\nablala_{v} A^i_{v,x^1} =\left. \left(\nablala_v A^i_{v,y}\right)\right|_{y=x^1}.
\end{equation*}
It is important to note that we carefully distinguish between $\nablala_v A^i_{v,x^1}$ and $\nablala_v (A^i_{v,x^1(x,v)})$. \\
\hide
\begin{equation}gin{proposition}
[Faa di Bruno formula] For higher order $n$-derivatives, the following formula would be useful.
\[
(f\circ H)^{(n)} = \sum_{\sum_{j=1}^{n}j m_{j}=n} \frac{n!}{m_{1}!\cdotots m_{n}!} \big( f^{(m_{1}+\cdotots+m_{n})}\circ H \big) \partialrod_{j=1}^{n} \begin{equation}ig( \frac{H^{(j)}}{j!} \begin{equation}ig)^{m_{j}}
\]
\end{proposition}
\unhide
\begin{equation}gin{remark}
Assume $f_{0}$ satisfies \eqref{BC}. If $f_{0} \in C^{1} _{x,v}( \overline{\Omega}\times \mathbb{R}^{2})$, then
\begin{equation} \label{C1_v trivial}
\nablala_{v}f_{0}(x,v) = \nablala_{v}f_{0}(x,R_{x}v)R_{x},\quad \forall x\in\partial\Omega,\quad \forall v\in\mathbb{R}^{2},
\end{equation}
also hold. Similalry, if $f_{0} \in C^{2}_{x,v}( \overline{\Omega}\times \mathbb{R}^{2})$, then
\begin{equation} \label{C2_v trivial}
\nablala_{vv}f_{0}(x,v) = R_{x}\nablala_{vv}f_{0}(x,R_{x}v)R_{x},\quad \forall x\in\partial\Omega,\quad \forall v\in\mathbb{R}^{2},
\end{equation}
also holds as well as \eqref{C1_v trivial}. \\
\end{remark}
\begin{equation}gin{theorem} [$C^{1}$ regularity] \label{thm 1}
Let $f_{0}$ be $C^{1}_{x,v}( \overline{\Omega}\times\mathbb{R}^{2})$ which satisfies \eqref{BC}. If initial data $f_{0}$ satisfies
\begin{equation} \label{C1 cond}
\begin{equation}ig[ \nablala_x f_0( x,v) + \nablala_v f_0(x, v) \frac{ (Qv)\otimes (Qv) }{v\cdotot n} \begin{equation}ig] R_{x}
=
\nablala_x f_0(x, R_{x}v)
+ \nablala_v f_0(x, R_{x}v) \frac{ (QR_{x}v)\otimes (QR_{x}v) }{R_{x}v\cdotot n},\quad (x,v)\in \gamma_{-},
\end{equation}
then $f(t,x,v)$ defined in \eqref{solution} is a unique $C^{1}_{t,x,v}(\mathbb{R}_{+}\times \mathcal{I})$ solution of \eqref{eq}. We also note that if \eqref{C1 cond} holds, then it also holds for $(x,v)\in \gamma_{+}$. Here, $Q$ is counterclockwise rotation by $\frac{\partiali}{2}$ in $\mathbb{R}^{2}$. Moreover, if the initial condition \eqref{C1 cond} does not hold, then $f(t,x,v)$ is not of class $C^1_{t,x,v}$ at time $t$ such that $t^k(t,x,v)=0$ for some $k$.
\begin{equation}gin{remark}[Example of initial data satisfying \eqref{C1 cond}] \label{example}
In \eqref{C1 cond}, we consider the following special case
\begin{equation}gin{equation} \label{specialcase}
\nablala_x f_0(x,v)R_x = \nablala_x f_0(x,R_xv) \quad \textrm{and} \quad \nablala_vf_0(x,v)\frac{(Qv)\otimes (Qv)}{v\cdotot n}R_x = \nablala_v f_0(x,R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n},
\end{equation}
for $(x,v)\in \gamma_-$. Since $Q^TR_xQ=-R_x$ and $v\cdotot n=-R_xv\cdotot n$, we derive
\begin{equation}gin{equation*}
\nablala_v f_0(x,v) \cdotot (Qv) = \nablala_v f_0(x,R_xv)\cdotot (QR_xv),
\end{equation*}
from the second condition above. Here, $A^T$ means transpose of a matrix $A$. From \eqref{C1_v trivial}, we get
\begin{equation}gin{equation*}
\nablala_v f_0(x,R_xv)\cdotot (R_xQv) = \nablala_v f_0(x,R_xv) \cdotot (QR_xv),
\end{equation*}
which implies that $\nablala_vf_0(x,v)\cdotot (Qv) = \nablala_v f_0(x,R_xv)\cdotot (R_xQv) =0$ because $R_xQ= -QR_x$. It means that $\nablala_v f_0(x,v)$ is parallel to $v$. Then, $ f_0(x,v)$ is a radial function with respect to $v$. Since the second condition in \eqref{specialcase} also holds for $(x,v)\in \gamma_+$, we deduce that a direction of $\nablala_v f_0(x,v)$ is $v^T$ for $v\in \gamma_- \cup\gamma_+$. In other words,
\begin{equation}gin{equation*}
f_0(x,v)=G(x,\vert v \vert), \quad (x,v)\in \gamma_-\cup \gamma_+,
\end{equation*}
where $G$ is a real-valued $C^1_{x,v}$ function. Moreover, $f_0$ can be continuously extended to $\gamma_0$ to satisfy $f_0 \in C^1_{x,v}(\partialartial \Omega\times \mathbb{R}^2)$. From the first condition $\nablala_x f_0 (x,v)R_x = \nablala_x f_0(x,R_xv)$ in \eqref{specialcase}, we have
\begin{equation}gin{equation*}
\nablala_x G(x,\vert v \vert) R_x = \nablala_x G(x,\vert v \vert).
\end{equation*}
Thus, $\nablala_x G(x,\vert v \vert)$ is orthogonal to $n(x)=x$, which means that the directional derivative $\nablala_x f_0(x,v) \cdotot n(x)$ be 0 for $x\in \partialartial \Omega$. In conclusion, $f_0(x,v)=G(x,\vert v \vert)$ such that $\nablala_x f_0(x,v) \cdotot n(x)=0$ for all $(x,v)\in \partialartial \Omega\times \mathbb{R}^2$ whenever $f_0$ satisfies \eqref{specialcase} for $(x,v)\in \gamma_-$.
\end{remark}
\hide
\[
Q =
\begin{equation}gin{pmatrix}
\vert & \vert & \vert \\
\hat{x}\times \widehat{v\times x} & \hat{x} & \widehat{v\times x} \\
\vert & \vert & \vert
\end{pmatrix}
\begin{equation}gin{pmatrix}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}
\begin{equation}gin{pmatrix}
\vert & \vert & \vert \\
\hat{x}\times \widehat{v\times x} & \hat{x} & \widehat{v\times x} \\
\vert & \vert & \vert
\end{pmatrix}^{-1}.
\]
\unhide
\end{theorem}
\begin{equation}gin{theorem} [$C^{2}$ regularity] \label{thm 2}
Let $f_{0}$ be $C^{2}_{x,v}( \overline{\Omega}\times\mathbb{R}^{2})$ which satisfies \eqref{BC} and \eqref{C1 cond}. (The condition \eqref{C1 cond} was necessary to satisfy $f(t,x,v)\in C^1_{t,x,v}$ in Theorem \e^{\frac 12}f{thm 1}). If we assume
\begin{equation} \label{C2 cond34}
\nablala_{x}f_0(x, R_{x}v) \partialarallel (R_{x}v)^{T},\quad \nablala_{v}f_0(x, R_{x}v) \partialarallel (R_{x}v)^{T},
\end{equation}
and
\begin{equation}gin{eqnarray}
&&R_{x} \begin{equation}ig[ \nablala_{xv}f_{0}(x,v) + \nablala_{vv}f_{0}(x,v) \frac{ (Qv)\otimes (Qv)}{v\cdotot n} \begin{equation}ig] R_{x}
= \nablala_{xv}f_{0}(x, R_xv) + \nablala_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \notag \\
&&\quad\hspace{7.5cm} +
R_{x}
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_1
\\
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_2
\end{bmatrix}
R_{x},
\label{C2 cond 1} \\
&&R_{x}\begin{equation}ig[ \nablala_{xx}f_{0}(x,v) + \nablala_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdotot n} + \frac{ (Qv)\otimes (Qv)}{v\cdotot n} \nablala_{xv}f_{0}(x, v) \begin{equation}ig] R_{x} \notag \\
&&\quad = \nablala_{xx}f_{0}(x, R_xv) + \nablala_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n}
+ \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \nablala_{xv}f_{0}(x, R_xv) \notag \\
&&\quad \quad
-2R_x
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{1}_{v,x}
\\
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{2}_{v,x}
\end{bmatrix} R_xA_{v,x}R_x
+
A_{v,x}\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_1 \\
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_2
\end{bmatrix}R_x
\notag \\
&&\quad \quad
- 2 R_x
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_1
\\
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_2
\end{bmatrix} R_x,
\label{C2 cond 2}
| 3,754 | 129,481 |
en
|
train
|
0.4976.4
|
\begin{equation}gin{theorem} [$C^{2}$ regularity] \label{thm 2}
Let $f_{0}$ be $C^{2}_{x,v}( \overline{\Omega}\times\mathbb{R}^{2})$ which satisfies \eqref{BC} and \eqref{C1 cond}. (The condition \eqref{C1 cond} was necessary to satisfy $f(t,x,v)\in C^1_{t,x,v}$ in Theorem \e^{\frac 12}f{thm 1}). If we assume
\begin{equation} \label{C2 cond34}
\nablala_{x}f_0(x, R_{x}v) \partialarallel (R_{x}v)^{T},\quad \nablala_{v}f_0(x, R_{x}v) \partialarallel (R_{x}v)^{T},
\end{equation}
and
\begin{equation}gin{eqnarray}
&&R_{x} \begin{equation}ig[ \nablala_{xv}f_{0}(x,v) + \nablala_{vv}f_{0}(x,v) \frac{ (Qv)\otimes (Qv)}{v\cdotot n} \begin{equation}ig] R_{x}
= \nablala_{xv}f_{0}(x, R_xv) + \nablala_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \notag \\
&&\quad\hspace{7.5cm} +
R_{x}
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_1
\\
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_2
\end{bmatrix}
R_{x},
\label{C2 cond 1} \\
&&R_{x}\begin{equation}ig[ \nablala_{xx}f_{0}(x,v) + \nablala_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdotot n} + \frac{ (Qv)\otimes (Qv)}{v\cdotot n} \nablala_{xv}f_{0}(x, v) \begin{equation}ig] R_{x} \notag \\
&&\quad = \nablala_{xx}f_{0}(x, R_xv) + \nablala_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n}
+ \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \nablala_{xv}f_{0}(x, R_xv) \notag \\
&&\quad \quad
-2R_x
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{1}_{v,x}
\\
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{2}_{v,x}
\end{bmatrix} R_xA_{v,x}R_x
+
A_{v,x}\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_1 \\
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_2
\end{bmatrix}R_x
\notag \\
&&\quad \quad
- 2 R_x
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_1
\\
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_2
\end{bmatrix} R_x,
\label{C2 cond 2}
\end{eqnarray}
where $x=(x_1,x_2), \; v=(v_1,v_2)$, and
\begin{equation}gin{align*}
&\mathcal{J}_1:=\frac{1}{v\cdotot x} \begin{equation}gin{bmatrix}
-4v_2x_1x_2 & 4v_1x_1x_2 \\
-2v_2(x_2^2-x_1^2) & 2v_1(x_2^2-x_1^2)
\end{bmatrix}, \quad
\mathcal{J}_2:= \frac{1}{v\cdotot x}\begin{equation}gin{bmatrix}
-2v_2(x_2^2-x_1^2) & 2v_1(x_2^2-x_1^2)\\
4v_2x_1x_2 & -4v_1x_1x_2
\end{bmatrix},\\
&\mathcal{K}_1:=\begin{equation}gin{bmatrix}
\mathrm{d}frac{4v_1^2v_2^2x_1^3 +2v_1v_2^3(3x_1^2x_2-x_2^3)+ 2v_2^4(3x_1x_2^2+x_1^3)}{(v\cdotot x)^3} & \mathrm{d}frac{-4v_1^3v_2x_1^3-2v_1^2v_2^2(3x_1^2x_2-x_2^3)-2v_1v_2^3(3x_1x_2^2+x_1^3)}{(v\cdotot x)^3}\\
\mathrm{d}frac{4v_2^4x_2^3+2v_1v_2^3(3x_1x_2^2-x_1^3)+2v_1^2v_2^2(3x_1^2x_2+x_2^3)}{(v\cdotot x)^3} & \mathrm{d}frac{-4v_1v_2^3x_2^3-2v_1^2v_2^2(3x_1x_2^2-x_1^3)-2v_1^3v_2(3x_1^2x_2+x_2^3)}{(v\cdotot x)^3}
\end{bmatrix},\\
&\mathcal{K}_2 := \begin{equation}gin{bmatrix}
\mathrm{d}frac{-4v_1^3v_2x_1^3-2v_1v_2^3(3x_1x_2^2+x_1^3) -2v_1^2v_2^2(3x_1^2x_2-x_2^3)}{(v\cdotot x)^3} & \mathrm{d}frac{4v_1^4x_1^3 +2v_1^2v_2^2(3x_1x_2^2+x_1^3)+2v_1^3v_2 (3x_1^2x_2-x_2^3)}{(v \cdotot x)^3}\\
\mathrm{d}frac{-4v_1v_2^3x_2^3 -2v_1^3v_2(3x_1^2x_2+x_2^3) -2v_1^2v_2^2(3x_1x_2^2-x_1^3)}{(v\cdotot x)^3} & \mathrm{d}frac{4v_1^2 v_2^2 x_2^3 +2v_1^4(3x_1^2x_2+x_2^3)+2v_1^3v_2(3x_1x_2^2-x_1^3)}{(v \cdotot x)^3}
\end{bmatrix},
\end{align*}
for all $(x,v)\in \gamma_-$, then $f(t,x,v)$ defined in \eqref{solution} is a unique $C^{2}_{t,x,v}(\mathbb{R}_{+}\times \mathcal{I})$ solution of \eqref{eq}.
In this case, $f_0(x,v)=G(x,\vert v \vert)$ satisfying $\nablala_x f_0(x, v)=0$ for $x \in \partialartial \Omega$, where $G$ is a real-valued $C^2_{x,v}$ function. Additionally, $f(t,x,v)$ is not of class $C^2_{t,x,v}$ at time $t$ such that $t^k(t,x,v)=0$ for some $k$ if one of the initial conditions \eqref{C2 cond34}, \eqref{C2 cond 1}, and \eqref{C2 cond 2} for $(x,v)\in \gamma_-$ is not satisfied.
\end{theorem}
\begin{equation}gin{remark} (Higher regularity)
If we want higher regularity such as $C^{3}$ and $C^{4}$, we should assume additional initial-boundary compatibility conditions for those regularities as we assumed \eqref{C2 cond34}-\eqref{C2 cond 2} in Theorem \e^{\frac 12}f{thm 2} for $C^{2}$ as well as \eqref{C1 cond}. Although the computation for higher regularity is available in principle, we should carefully check whether the additional conditions for higher regularity make lower regularity conditions trivial or not. Here, the trivial condition for \eqref{C2 cond 2} means
\[
\nablala_{x,v}f_0(x,v) = 0,\quad \forall (x,v)\in\gamma_{-}.
\]
In fact, the answer is given in Section 1.2. Because of very nontrivial null structure of \eqref{1st order}, imposing \eqref{C2 cond34}-\eqref{C2 cond 2} does not make \eqref{C1 cond} trivial, fortunately. Once we find a new initial-boundary compatibility condition for $C^{3}$, for example, we also have to check
\begin{equation} \notag
\begin{equation}gin{split}
&\text{Do additional compatibility conditions for $C^{3}$ regularity make } \\
&\quad \text{\eqref{C1 cond} or \eqref{C2 cond34}-\eqref{C2 cond 2} trivial, e.g. $\nablala f_0 = \nablala^{2}f_0=0$ on $\gamma_{-}$?}
\end{split}
\end{equation}
Whenever we gain conditions for higher order regularity, initial-boundary compatibility conditions are stacked and they might make lower order compatibility conditions just trivial ones. It is a very interesting question, but they require very complicated geometric considerations and obtaining higher order condition itself will be also very painful. But, if we impose very strong (trivial) high order initial-boundary compatibility conditions
\[
\nablala_{x,v}^{i}f_0(x,v) = 0,\quad \forall(x,v)\in\gamma_{-},\quad 1\leq i \leq k,
\]
then we will get $C^{k}$ regularity of the solution.
\end{remark}
\begin{equation}gin{remark} (Necessary conditions for $C^{2}$ regularity)
In Theorem \e^{\frac 12}f{thm 2}, initial conditions \eqref{C2 cond 1} and \eqref{C2 cond 2} are sufficient conditions for $f \in C^2_{t,x,v}$. Although these contain non-symmetric complicated first-order terms, we can obtain simpler necessary conditions.
Observe that the null space of $\mathcal{J}_i, \mathcal{K}_i$ is spanned by $v$, i.e.,
\begin{equation}gin{equation} \label{null J,K}
\mathcal{J}_i v =0, \quad \mathcal{K}_i v =0, \quad i=1,2.
\end{equation}
Multiplying the reflection matrix $R_x$ on both sides in \eqref{C2 cond 1} and \eqref{C2 cond 2}, we get necessary conditions for $C^{2}$ solution,
\hide
\begin{equation}gin{align*}
&\nablala_{xv} f_0(x,v) +\nablala_{vv} f_0(x,v)\frac{ (Qv)\otimes (Qv)}{v\cdotot n} = R_x \left[\nablala_{xv}f_{0}(x, R_xv) + \nablala_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \right]R_x \\
&\hspace{6.5cm} +
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_1
\\
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_2
\end{bmatrix},\\
&\nablala_{xx}f_{0}(x,v) + \nablala_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdotot n} + \frac{ (Qv)\otimes (Qv)}{v\cdotot n} \nablala_{xv}f_{0}(x, v)\\
&\quad = R_x \left[ \nablala_{xx}f_{0}(x, R_xv) + \nablala_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n}
+ \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \nablala_{xv}f_{0}(x, R_xv)\right]R_x \\
&\quad \quad -2
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{1}_{v,x}
\\
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{2}_{v,x}
\end{bmatrix} R_xA_{v,x}
+
R_xA_{v,x}\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_1 \\
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_2
\end{bmatrix}
- 2
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_1
\\
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_2
\end{bmatrix} ,
\end{align*}
\unhide
| 3,894 | 129,481 |
en
|
train
|
0.4976.5
|
Observe that the null space of $\mathcal{J}_i, \mathcal{K}_i$ is spanned by $v$, i.e.,
\begin{equation}gin{equation} \label{null J,K}
\mathcal{J}_i v =0, \quad \mathcal{K}_i v =0, \quad i=1,2.
\end{equation}
Multiplying the reflection matrix $R_x$ on both sides in \eqref{C2 cond 1} and \eqref{C2 cond 2}, we get necessary conditions for $C^{2}$ solution,
\hide
\begin{equation}gin{align*}
&\nablala_{xv} f_0(x,v) +\nablala_{vv} f_0(x,v)\frac{ (Qv)\otimes (Qv)}{v\cdotot n} = R_x \left[\nablala_{xv}f_{0}(x, R_xv) + \nablala_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \right]R_x \\
&\hspace{6.5cm} +
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_1
\\
\nablala_{v}f_{0}(x , R_xv) \mathcal{J}_2
\end{bmatrix},\\
&\nablala_{xx}f_{0}(x,v) + \nablala_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdotot n} + \frac{ (Qv)\otimes (Qv)}{v\cdotot n} \nablala_{xv}f_{0}(x, v)\\
&\quad = R_x \left[ \nablala_{xx}f_{0}(x, R_xv) + \nablala_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n}
+ \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \nablala_{xv}f_{0}(x, R_xv)\right]R_x \\
&\quad \quad -2
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{1}_{v,x}
\\
\nablala_{v}f_{0}(x, R_xv) \nablala_{v}A^{2}_{v,x}
\end{bmatrix} R_xA_{v,x}
+
R_xA_{v,x}\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_1 \\
\nablala_{v}f_{0}(x, R_xv) \mathcal{J}_2
\end{bmatrix}
- 2
\begin{equation}gin{bmatrix}
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_1
\\
\nablala_{v}f_{0}(x, R_xv) \mathcal{K}_2
\end{bmatrix} ,
\end{align*}
\unhide
\begin{equation}gin{equation} \label{C2 nec cond}
\begin{equation}gin{split}
&v^T \left[ \nablala_{xv} f_0(x,v) +\nablala_{vv} f_0(x,v)\frac{ (Qv)\otimes (Qv)}{v\cdotot n}\right] v=(R_xv)^T \left[\nablala_{xv}f_{0}(x, R_xv) + \nablala_{vv}f_{0}(x, R_xv) \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \right](R_xv),\\
&v^T \left[ \nablala_{xx}f_{0}(x,v) + \nablala_{vx}f_{0}(x, v) \frac{ (Qv)\otimes (Qv)}{v\cdotot n} + \frac{ (Qv)\otimes (Qv)}{v\cdotot n} \nablala_{xv}f_{0}(x, v)\right]v \\
&=(R_xv)^T \left[\nablala_{xx}f_{0}(x, R_xv) + \nablala_{vx}f_{0}(x, R_xv)\frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n}
+ \frac{(QR_xv)\otimes (QR_xv)}{R_xv\cdotot n} \nablala_{xv}f_{0}(x, R_xv)\right](R_xv),
\end{split}
\end{equation}
for all $(x,v) \in \gamma_-$, where we used $R_x^2 =I$, \eqref{null J,K}, and $A_{v,x}v=0$ in Lemma \e^{\frac 12}f{lem_RA}.
\end{remark}
\begin{equation}gin{remark}\label{extension C2 cond34}
Using \eqref{C1_v trivial} and \eqref{C2 cond34} yields that
\begin{equation}gin{equation} \label{f0 gamma+}
\nablala_v f_0(x,v) \partialarallel v^T,
\end{equation}
for all $(x,v) \in \gamma_-\cup \gamma_+$. From \eqref{f0 gamma+}, we have
\begin{equation}gin{equation*}
\nablala_v f_0(x,v) \frac{(Qv) \otimes (Qv)}{v\cdotot n} R_x = \nablala_v f_0(x,R_xv)\frac{(QR_xv)\otimes(QR_xv)}{R_x v\cdotot n}.
\end{equation*}
Thus, the condition \eqref{C1 cond} in Theorem \e^{\frac 12}f{thm 1} becomes
\begin{equation}gin{equation*}
\nablala_x f_0(x,v) R_x = \nablala_x f_0(x,R_xv).
\end{equation*}
Similarly, by \eqref{C2 cond34} and the above result, we have
\begin{equation}gin{equation*}
\nablala_x f_0(x,v) \partialarallel v^T,
\end{equation*}
for all $(x,v) \in \gamma_-\cup \gamma_+$. Hence, we conclude that \eqref{C2 cond34} can be extended to $\gamma_-\cup\gamma_+$ under conditions \eqref{C1 cond} and \eqref{C2 cond34} for $(x,v)\in\gamma_-$.
\end{remark}
\begin{equation}gin{remark}[Extension to 3D sphere]
By symmetry, Theorem \e^{\frac 12}f{thm 1} and \e^{\frac 12}f{thm 2} also hold for three dimensional sphere if the rotation operator $Q$ is properly redefined in the plane spanned by $\{x, v\}$ for $x\in \partial\Omega$, $x\nparallel v\neq 0$. \\
\end{remark}
\begin{equation}gin{theorem} [Regularity estimates] \label{thm 3}
The $C^{1}(\mathbb{R}_{+}\times \mathcal{I})$ and $C^{2}(\mathbb{R}_{+}\times \mathcal{I})$ solutions of Theorem \e^{\frac 12}f{thm 1} and \e^{\frac 12}f{thm 2} enjoy the following regularity estimates :
\begin{equation} \label{C1 bound}
\|f\|_{C^{1}_{t,x,v}} \lesssim \|f_0\|_{C^{1}} \frac{|v|}{|v\cdotot n(x_{\mathbf{b}})|^{2}} \langle v \rangle^{2}(1 + |v|t),
\end{equation}
\begin{equation} \label{C2 bound}
\|f\|_{C^{2}_{t,x,v}} \lesssim \|f_0\|_{C^{2}} \frac{|v|^{2}}{|v\cdotot n(x_{\mathbf{b}})|^{4}} \langle v \rangle^{4}(1 + |v|t)^{2},
\end{equation}
where $x_{\mathbf{b}} = x_{\mathbf{b}}(x,v)$ and $\langle v \rangle := 1 + |v|$.
\end{theorem}
| 2,255 | 129,481 |
en
|
train
|
0.4976.6
|
\subsection{Brief sketch of proofs and some important remarks}
In this paper, our aim is to analyze regularity of mild form \eqref{solution} where characteristic $(X(0;t, x, v), V(0;t, x,v))$ is well-defined (by excluding $\gamma_0$). If backward in time position $X(0;t,x,v) \notin \partial\Omega$, the characteristic is also a smooth function and we expect that the regularity of \eqref{solution} will be the same as initial data $f_0$ by the chain rule. When $X(0;t,x,v) \in \partial\Omega$, however, the derivative via the chain rule does not work anymore because of discontinuous behavior of velocity $V(0;t,x,v)$. Depending on perturbed directions, we obtain different directional derivatives. In fact, we can split directions into two pieces: one gives bouncing and the other does not. See \eqref{R12_v} and \eqref{set R_vel} for $C^{1}_{v}$ for example. By matching these directional derivatives and performing some symmetrization, we obtain symmetrized initial-boundary compatibility condition \eqref{C1 cond}. Of course, \eqref{C1_v trivial} also holds, but \eqref{C1_v trivial} is gained by taking the $v$-derivative of \eqref{BC} directly. We note that both $C^{1}_{x}$ and $C^{1}_{v}$ conditions yield identical initial compatibility condition \eqref{C1 cond}, and the condition for $C^{1}_{t}$ is just a necessary condition for \eqref{C1 cond}. \\
The analysis becomes much more complicated when we study $C^{2}$ conditions. Nearly all of our analysis consist of precise equalities, instead of estimates. This makes our business much harder. First, let us consider four cases: $\nablala_{xx}, \nablala_{xv}, \nablala_{vx}, \nablala_{vv}$. These yield very complicated initial-boundary compatibility conditions and in particular they contain derivatives of each column of reflection operator $R_x$ or $\nablala_{x,v}((n(x)\otimes n(x))v)$. It is nearly impossible to give proper geometric interpretation for each term. See \eqref{xv star1} and \eqref{xv star2} for example. \\
Nevertheless, it is quite interesting that the four conditions from $\nablala_{xx}, \nablala_{xv}, \nablala_{vx}, \nablala_{vv}$ can be rearranged with respect to the order of time $t$. By matching all directional derivatives, we obtain \eqref{Cond2 1}--\eqref{Cond2 4} which contain both second-order terms and first-order terms. However, the conditions from $\nablala_{xx}, \nablala_{xv}, \nablala_{vx}, \nablala_{vv}$ must satisfy transpose compatibility condition
\begin{equation} \label{trans comp}
\nablala_{xv}^{T}=\nablala_{vx} \ \ \text{and} \ \ \nablala_{xx}^{T} = \nablala_{xx},
\end{equation}
since we hope the solution to be $C^{2}$. However, it is extremely hard to find any good geometric meaning or properties of some terms like
\begin{equation} \label{1st order}
\nablala_{x}(R^{i}_{x^{1}(x,v)}),\quad \nablala_{x}(A^{i}_{v, x^{1}(x,v)}),\quad \text{for}\quad i=1,2,
\end{equation}
in \eqref{Cond2 1}--\eqref{Cond2 4}. If they do not have any special structures, the only way to get compatibility \eqref{trans comp} is to impose $\nablala_{x,v}f_0(x, Rv) = 0$ for all $(x,v)\in \gamma_{-}$. Then $C^{1}$ compatibility condition \eqref{C1 cond} becomes just trivial. Fortunately, however, the matrices of \eqref{1st order} have a rank $1$ structure. {\bf More surprisingly, all the null spaces are spanned by velocity $v$!} That is, from Lemma \e^{\frac 12}f{d_RA} and Lemma \e^{\frac 12}f{dx_A},
\[
\nablala_{x}(R_{x^1(x,v)}^1)v =0, \quad \nablala_{x}(R_{x^1(x,v)}^2) v =0,\quad \nablala_x(-2A_{v,x^1(x,v)}^1)v =0, \quad \nablala_x (-2A_{v,x^1(x,v)}^2)v=0.
\]
From these interesting results, we can derive necessary conditions \eqref{C2 cond34} for transpose compatibility \eqref{trans comp}. By imposing \eqref{C2 cond34}, we derive $C^{2}$ conditions as in Theorem \e^{\frac 12}f{thm 2}, while keeping $C^{1}$ condition \eqref{C1 cond} nontrivial. We note that all the conditions that include $\partial_{t}$ are repetitions of \eqref{Cond2 1}--\eqref{Cond2 4}. \\
In the last section, we study $C^{1}$ and $C^{2}$ regularity estimates of the solution \eqref{solution}. Essentially the regularity estimates of the solution come from the regularity estimates of characteristic $(X(0;t,x,v), V(0;t,x,v))$. For $C^{1}$ of $(X(0;t,x,v), V(0;t,x,v))$, we obtain Lemma \e^{\frac 12}f{est der X,V}. Note that we can find some cancellation that gives no singular bound for $\nablala_{v}X(0;t,x,v)$ which was found in \cite{GKTT2017} for general 3D convex domains. Growth in time need not to be exponential, but it is just linear in time $t$. The second derivative of characteristic is much more complicated and nearly impossible to try to find any cancellation, because of too many terms and combinations that appear. Instead, by studying the most singular terms only, we obtain rough bounds in Lemma \e^{\frac 12}f{2nd est der X,V}.
| 1,531 | 129,481 |
en
|
train
|
0.4976.7
|
\section{Preliminaries}
Now, let us recall standard matrix notations which will be used in this paper. \\
\begin{equation}gin{definition}
When we perform matrix multiplications throughout this paper, we basically treat a n-dimensional vector $v$ as a {\it column} vector
\[
v = \begin{equation}gin{pmatrix}
v_{1} \\ \vdots \\ v_{n}
\end{pmatrix}.
\]
For about gradient of a smooth scalar function $a(x)$, however, we treat n-dimensional vector $\nablala a$ as a {\it row} vector,
\[
\nablala a(x) := (\partial_{x_{1}} a, \partial_{x_{2}} a, \cdotots, \partial_{x_{n}} a).
\]
For a smooth vector function $v : \mathbb{R}^{n}\rightarrow \mathbb{R}^{m}$ with $v(x)= \begin{equation}gin{pmatrix}
v_{1}(x) \\ \vdots \\ v_{m}(x)
\end{pmatrix}$, we define $\nablala_{x}v(x)$ as $m\times n$ matrix,
\[
\nablala_{x}v := \begin{equation}gin{pmatrix}
\partial_{1} v_{1} & \cdotots & \partial_{n} v_{1} \\
\partial_{1} v_{2} & \cdotots & \partial_{n} v_{2} \\
\vdots & \vdots & \vdots \\
\partial_{1} v_{m} & \cdotots & \partial_{n} v_{m} \\
\end{pmatrix}_{m\times n}
=
\begin{equation}gin{pmatrix}
& \nablala_{x} v_{1} &\\
&\vdots& \\
&\nablala_{x} v_{m}& \\
\end{pmatrix}_{m\times n} .
\]
We use $\otimes$ to denote tensor product
\begin{equation}gin{equation*}
a\otimes b := \begin{equation}gin{pmatrix}
a_{1} \\ \vdots \\ a_{m}
\end{pmatrix}
\begin{equation}gin{pmatrix}
b_{1} & \cdotots & b_{n}
\end{pmatrix}. \\
\end{equation*}
\end{definition}
\begin{equation}gin{lemma}\label{matrix notation}
(1) (Product rule) For scalar function $a(x)$ and vector function $v(x)$,
\[
\nablala (a(x)v(x)) = a(x)\nablala v(x) + v\otimes \nablala a(x).
\]
(2) (Chain rule) For vector functions $v(x)$ and $w(x)$,
\[
\nablala (v(w(x))) = \nablala v(w(x)) \nablala w(x).
\]
(3) (Product rule) For vector functions,
\[
\nablala(v(x)\cdotot w(x)) = v(x)\nablala w(x) + w(x)\nablala v(x).
\]
(4) For matrix $d\times d$ matrix $A(x)$ and $d\times 1$ vector $v(x)$,
\begin{equation}gin{equation} \label{d_matrix}
\begin{equation}gin{split}
\nablala_{x} (A(x)v(x))
&= A(x)\nablala v(x)
+
\begin{equation}gin{pmatrix}
v(x)\nablala A^{1}(x) \\
\vdots \\
v(x)\nablala A^{d}(x)
\end{pmatrix} \\
&= A(x)\nablala v(x)
+ \sum_{k=1}^{d}
\partial_{k}A(x) E_{k},
\end{split}
\end{equation}
where $A^{i}(x)$ is $i$-th row of $A(x)$ and $E_{k}$ is $d\times d$ matrix whose $k$th column is $v$ and others are zero. (Here $\partial_{k}A(x)$ means elementwise $x_{k}$-derivative of $A(x)$.) Moreover, if $A = A(\theta(x))$ for some smooth $\theta:\Omega\rightarrow \mathbb{R}$, \\
\begin{equation} \label{d_matrix_theta}
\nablala_{x} (A(\theta)v(x))
= A(\theta)\nablala v(x)
+ \partial_{\theta}A(\theta)v \otimes \nablala_{x}\theta.
\end{equation}
\end{lemma}
\begin{equation}gin{proof}
Only \eqref{d_matrix_theta} needs some explanation. When $A=A(\theta(x))$,
\begin{equation}gin{equation*}
\begin{equation}gin{split}
\nablala_{x} (A(\theta)v(x))
&= A(\theta)\nablala v(x)
+ \sum_{k=1}^{d}
\partial_{k}A(\theta) E_{k}
=
A(\theta)\nablala v(x)
+ \sum_{k=1}^{d}
\partial_{\theta}A(\theta) \partial_{k}\theta(x)E_{k} \\
&= A(\theta)\nablala v(x)
+ \partial_{\theta}A(\theta)
\begin{equation}gin{pmatrix}
& & \\
\partial_{1}\theta(x) v & \cdotots & \partial_{d}\theta(x)v \\
& & \\
\end{pmatrix} \\
&= A(\theta)\nablala v(x)
+ \partial_{\theta}A(\theta)v \otimes \nablala_{x}\theta(x).
\end{split}
\end{equation*}
\end{proof}
\begin{equation}gin{lemma} \label{nabla xv b}
We have the following computations where $x_{\mathbf{b}} = x_{\mathbf{b}}(x,v)$ and $t_{\mathbf{b}}=t_{\mathbf{b}}(x,v)$. \\
\begin{equation}gin{equation*}
\begin{equation}gin{split}
\nablala_{x}t_{\mathbf{b}} &= \frac{n(x_{\mathbf{b}})}{v\cdotot n(x_{\mathbf{b}})} , \\
\nablala_{v}t_{\mathbf{b}} &= -t_{\mathbf{b}}\nablala_{x}t_{\mathbf{b}} = -t_{\mathbf{b}}\frac{n(x_{\mathbf{b}})}{v\cdotot n(x_{\mathbf{b}})} , \\
\nablala_{x}x_{\mathbf{b}} &= I - \frac{v\otimes n(x_{\mathbf{b}})}{v\cdotot n(x_{\mathbf{b}})}, \\
\nablala_{v}x_{\mathbf{b}} &= -t_{\mathbf{b}}\begin{equation}ig(I - \frac{v\otimes n(x_{\mathbf{b}})}{v\cdotot n(x_{\mathbf{b}})} \begin{equation}ig). \\
\end{split}
\end{equation*}
\end{lemma}
\begin{equation}gin{proof}
Remind the definition of $x_{\mathbf{b}}$ and $t_{\mathbf{b}}$
\begin{equation}gin{equation*}
x_{\mathbf{b}}=x-t_{\mathbf{b}} v, \quad t_{\mathbf{b}}=\sup\{ s \; \vert \; x-sv \in \Omega\}.
\end{equation*}
Since $\xi(x) =0$ for $x \in \partial\Omega$, we have $\xi(x_{\mathbf{b}}) = \xi(x-t_{\mathbf{b}} v)=0$. Taking the $x\mbox{-}$derivative $\nablala_x$, we get
\begin{equation}gin{align*}
\nablala_x(\xi(x_{\mathbf{b}}))&= (\nablala\xi)(x_{\mathbf{b}}) -[ (\nablala \xi)(x_{\mathbf{b}}) \cdotot v]\nablala_x t_{\mathbf{b}} \\
&= 0,
\end{align*}
where the first equality comes from product rule in Lemma \e^{\frac 12}f{matrix notation}. Thus, we can derive
\begin{equation}gin{equation*}
\nablala_x t_{\mathbf{b}} = \frac{( \nablala \xi)(x_{\mathbf{b}})}{[ (\nablala \xi)(x_{\mathbf{b}}) \cdotot v]} = \frac{ n(x_{\mathbf{b}})}{v \cdotot n(x_{\mathbf{b}}) }.
\end{equation*}
Similarly, taking the $v\mbox{-}$derivative $\nablala_v$ and product rule in Lemma \e^{\frac 12}f{matrix notation} yields
\begin{equation}gin{equation*}
\nablala_v(\xi(x_{\mathbf{b}})) = (\nablala \xi)(x_{\mathbf{b}})(-t_{\mathbf{b}} I - v \otimes \nablala_v t_{\mathbf{b}})= 0,
\end{equation*}
which implies $\nablala_v t_{\mathbf{b}} = - t_{\mathbf{b}} \frac{n(x_{\mathbf{b}})}{ v\cdotot n(x_{\mathbf{b}})}.$ It follows from the calculation of $\nablala_x t_{\mathbf{b}}$ and $\nablala_v t_{\mathbf{b}}$ above that
\begin{equation}gin{align*}
\nablala_x x_{\mathbf{b}} &= \nablala_x ( x- t_{\mathbf{b}} v) = I - v \otimes \nablala_x t_{\mathbf{b}} = I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})} \\
\nablala_v x_{\mathbf{b}} &= \nablala_v (x-t_{\mathbf{b}} v) = -t_{\mathbf{b}} I - v \otimes \nablala_v t_{\mathbf{b}} = -t_{\mathbf{b}} \left(I - \frac{ v \otimes n(x_{\mathbf{b}})}{ v \cdotot n(x_{\mathbf{b}})}\right).
\end{align*}
\end{proof}
\begin{equation}gin{lemma} \label{d_n} For $n(x_{\mathbf{b}}(x,v))$, we have the following derivative rules,
\begin{equation}gin{equation} \label{normal}
\nablala_x [n(x_{\mathbf{b}})] = I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})}, \quad \nablala_v [n(x_{\mathbf{b}})] = -t_{\mathbf{b}} \begin{equation}ig( I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})} \begin{equation}ig),
\end{equation}
where $x_{\mathbf{b}}=x_{\mathbf{b}}(x,v)$.
\end{lemma}
\begin{equation}gin{proof}
For $\nablala_x n(x_{\mathbf{b}})$, we apply the chain rule in Lemma \e^{\frac 12}f{matrix notation} to $(\nablala \xi)(x_{\mathbf{b}})$ and $\frac{1}{\vert (\nablala \xi)(x_{\mathbf{b}})\vert }$ respectively. Because $\nablala \xi (x) \neq 0 $ at the boundary $x \in \partial\Omega$ in a circle, it is possible to apply the chain rule to $\frac{1}{\vert (\nablala \xi)(x_{\mathbf{b}})\vert }$. Taking $x\mbox{-}$derivative $\nablala_x$, one obtains
\begin{equation}gin{align*}
\nablala_x [(\nablala \xi)(x_{\mathbf{b}})] &= (\nablala ^2 \xi)(x_{\mathbf{b}})\nablala_x x_{\mathbf{b}}, \\
\nablala_x \left[ \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert} \right] & =- \frac{(\nablala\xi)(x_{\mathbf{b}}) (\nablala^2\xi)(x_{\mathbf{b}}) \nablala_xx_{\mathbf{b}}}{\vert (\nablala \xi)(x_{\mathbf{b}}) \vert^3}.
\end{align*}
Hence,
\begin{equation}gin{align*}
\nablala_x [n(x_{\mathbf{b}})] = \nablala_x \left[ \frac{ (\nablala \xi)(x_{\mathbf{b}})}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert } \right] &= \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert } \nablala_x [ (\nablala \xi)(x_{\mathbf{b}}) ] + (\nablala \xi)(x_{\mathbf{b}}) \otimes \nablala_x \left [ \frac{1}{\vert (\nablala \xi)(x_{\mathbf{b}}) \vert} \right] \\
& = \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert }(\nablala ^2 \xi)(x_{\mathbf{b}})\nablala_x x_{\mathbf{b}} - \nablala \xi(x_{\mathbf{b}}) \otimes \frac{(\nablala\xi)(x_{\mathbf{b}}) (\nablala^2\xi)(x_{\mathbf{b}}) \nablala_xx_{\mathbf{b}}}{\vert (\nablala \xi)(x_{\mathbf{b}}) \vert^3}\\
&= \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert }\begin{equation}ig( I - n(x_{\mathbf{b}}) \otimes n(x_{\mathbf{b}})\begin{equation}ig) (\nablala^2 \xi)(x_{\mathbf{b}}) \nablala_x x_{\mathbf{b}}.
\end{align*}
Since $|\nablala\xi(x_{\mathbf{b}})| =1 $ and $\nablala^{2}\xi = I_{2}$, we deduce
\begin{equation}gin{align*}
\nablala_x [n(x_{\mathbf{b}})] &= \begin{equation}ig( I - n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) \begin{equation}ig) \begin{equation}ig( I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})} \begin{equation}ig) \\
&= I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})} - n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) + n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) \\
&= I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})}.
\end{align*}
The case for $\nablala_v [n(x_{\mathbf{b}})]$ is nearly same with extra term $-t_{\mathbf{b}}$ which comes from Lemma \e^{\frac 12}f{nabla xv b}. \\
\end{proof}
| 3,956 | 129,481 |
en
|
train
|
0.4976.8
|
\begin{equation}gin{lemma} \label{d_n} For $n(x_{\mathbf{b}}(x,v))$, we have the following derivative rules,
\begin{equation}gin{equation} \label{normal}
\nablala_x [n(x_{\mathbf{b}})] = I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})}, \quad \nablala_v [n(x_{\mathbf{b}})] = -t_{\mathbf{b}} \begin{equation}ig( I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})} \begin{equation}ig),
\end{equation}
where $x_{\mathbf{b}}=x_{\mathbf{b}}(x,v)$.
\end{lemma}
\begin{equation}gin{proof}
For $\nablala_x n(x_{\mathbf{b}})$, we apply the chain rule in Lemma \e^{\frac 12}f{matrix notation} to $(\nablala \xi)(x_{\mathbf{b}})$ and $\frac{1}{\vert (\nablala \xi)(x_{\mathbf{b}})\vert }$ respectively. Because $\nablala \xi (x) \neq 0 $ at the boundary $x \in \partial\Omega$ in a circle, it is possible to apply the chain rule to $\frac{1}{\vert (\nablala \xi)(x_{\mathbf{b}})\vert }$. Taking $x\mbox{-}$derivative $\nablala_x$, one obtains
\begin{equation}gin{align*}
\nablala_x [(\nablala \xi)(x_{\mathbf{b}})] &= (\nablala ^2 \xi)(x_{\mathbf{b}})\nablala_x x_{\mathbf{b}}, \\
\nablala_x \left[ \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert} \right] & =- \frac{(\nablala\xi)(x_{\mathbf{b}}) (\nablala^2\xi)(x_{\mathbf{b}}) \nablala_xx_{\mathbf{b}}}{\vert (\nablala \xi)(x_{\mathbf{b}}) \vert^3}.
\end{align*}
Hence,
\begin{equation}gin{align*}
\nablala_x [n(x_{\mathbf{b}})] = \nablala_x \left[ \frac{ (\nablala \xi)(x_{\mathbf{b}})}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert } \right] &= \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert } \nablala_x [ (\nablala \xi)(x_{\mathbf{b}}) ] + (\nablala \xi)(x_{\mathbf{b}}) \otimes \nablala_x \left [ \frac{1}{\vert (\nablala \xi)(x_{\mathbf{b}}) \vert} \right] \\
& = \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert }(\nablala ^2 \xi)(x_{\mathbf{b}})\nablala_x x_{\mathbf{b}} - \nablala \xi(x_{\mathbf{b}}) \otimes \frac{(\nablala\xi)(x_{\mathbf{b}}) (\nablala^2\xi)(x_{\mathbf{b}}) \nablala_xx_{\mathbf{b}}}{\vert (\nablala \xi)(x_{\mathbf{b}}) \vert^3}\\
&= \frac{1}{ \vert (\nablala \xi)(x_{\mathbf{b}}) \vert }\begin{equation}ig( I - n(x_{\mathbf{b}}) \otimes n(x_{\mathbf{b}})\begin{equation}ig) (\nablala^2 \xi)(x_{\mathbf{b}}) \nablala_x x_{\mathbf{b}}.
\end{align*}
Since $|\nablala\xi(x_{\mathbf{b}})| =1 $ and $\nablala^{2}\xi = I_{2}$, we deduce
\begin{equation}gin{align*}
\nablala_x [n(x_{\mathbf{b}})] &= \begin{equation}ig( I - n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) \begin{equation}ig) \begin{equation}ig( I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})} \begin{equation}ig) \\
&= I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})} - n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) + n(x_{\mathbf{b}})\otimes n(x_{\mathbf{b}}) \\
&= I - \frac{v \otimes n(x_{\mathbf{b}})}{v \cdotot n (x_{\mathbf{b}})}.
\end{align*}
The case for $\nablala_v [n(x_{\mathbf{b}})]$ is nearly same with extra term $-t_{\mathbf{b}}$ which comes from Lemma \e^{\frac 12}f{nabla xv b}. \\
\end{proof}
\hide
\begin{equation}gin{lemma}
For fixed $x\in\Omega$, we can classify direction $\mathbb{S}^{2}$ into three parts,
\begin{equation}gin{equation*}
\begin{equation}gin{split}
R_{0} &:= \{ \hat{r}\in \mathbb{S}^{2} : \nablala_{x}t_{\mathbf{b}}(x,v)\} \\
R_{1} &:= \{ \hat{r}\in \mathbb{S}^{2} : \} \\
R_{2} &:= \{ \hat{r}\in \mathbb{S}^{2} : \}
\end{split}
\end{equation*}
\end{lemma}
\begin{equation}gin{proof}
(i) From Proposition \eqref{nabla xv b}
\[
\frac{\partial}{\partial\varepsilon}t_{\mathbf{b}}(x+\varepsilon\hat{r}, v)\vert_{\varepsilon=0} = \nablala_{x}t_{\mathbf{b}}(x,v)\cdotot\hat{r} = \frac{\hat{r}\cdotot n(x_{\mathbf{b}}(x,v))}{v\cdotot n(x_{\mathbf{b}}(x,v))}
\]
\end{proof}
\unhide
| 1,605 | 129,481 |
en
|
train
|
0.4976.9
|
\section{Initial-boundary compatibility condition for $C^{1}_{t,x,v}$}
\begin{equation}gin{lemma} \label{lem_RA}
Recall definition \eqref{def A} of the matrix $A_{v,x}$. We have the following identities, for $(x,v) \in \{\partial\Omega \times \mathbb{R}^d\} \backslash \gamma_0$,
\begin{equation}gin{equation} \label{RA}
\begin{equation}gin{split}
R_xA_{v,x} &= \frac{1}{v\cdotot n(x)} Q(v\otimes v)Q^{T} = \frac{1}{v\cdotot n(x)} (Qv)\otimes (Qv), \\
A_{v,x} R_x &= \frac{1}{v\cdotot n(x)} R_xQ(v\otimes v)Q^{T}R_x = -\frac{1}{R_xv\cdotot n(x)} (QR_xv)\otimes (QR_xv), \\
\end{split}
\end{equation}
\begin{equation}gin{equation} \label{A2}
\begin{equation}gin{split}
A^{2}_{v,x} &= \frac{1}{(v\cdotot n(x))^{2}} (QR_xv\otimes QR_xv)(Qv\otimes Qv),
\end{split}
\end{equation}
\begin{equation}gin{equation} \label{Av=0}
A_{v,x}v =0,
\end{equation}
where $Q := Q_{\frac{\partiali}{2}}$ is counterclockwise rotation by angle $\frac{\partiali}{2}$.
\end{lemma}
\begin{equation}gin{proof}
We compute
\begin{equation}gin{equation*}
\begin{equation}gin{split}
R_xA_{v,x}R_x &:= \left[\left((v\cdotot n(x))I - (n(x) \otimes v) \right)\left(I + \frac{v\otimes n(x)}{v\cdotot n(x)}\right)\right] \\
&= \big(Qv \otimes Qn(x)\big)\left(I + \frac{v\otimes n(x)}{v\cdotot n(x)}\right).
\end{split}
\end{equation*}
Now let us define $\tau(x)= Q_{-\frac{\partiali}{2}}n(x)$ as tangential vector at $x\in\partial\Omega$. ($n$ as y-axis and $\tau$ as x-axis) Then,
\begin{equation}gin{equation*}
\begin{equation}gin{split}
R_xA_{v,x}R_x &:= Qv \otimes \begin{equation}ig( -\tau - \frac{v\cdotot\tau}{v\cdotot n(x)}n(x) \begin{equation}ig) \\
&= -\frac{1}{v\cdotot n(x)} Qv\otimes \begin{equation}ig( (v\cdotot n(x))\tau + (v\cdotot\tau)n(x) \begin{equation}ig) \\
&= -\frac{1}{v\cdotot n(x)} Qv\otimes \big( R_xQ^{T}v \big) \\
&= \frac{1}{v\cdotot n(x)} Qv\otimes \big( R_xQv \big) \\
&= \frac{1}{v\cdotot n(x)} Q(v\otimes v)Q^{T}R_x, \\
\end{split}
\end{equation*}
and we get \eqref{RA} using $R_xQ=-R_xQ^T$, because
\[
Q^{T}R_xQ = I - 2Q^{T}(n(x)\otimes n(x))Q = I - 2\tau\otimes\tau = -R_x. \\
\]
\eqref{A2} is simply obatined by \eqref{RA}. By definition of $A_{v,x}$ in \eqref{def A}, one obtains that
\begin{equation}gin{align*}
A_{v,x}v = \left[\left((v\cdotot n(x))I+(n(x) \otimes v) \right)\left(I-\frac{v\otimes n(x)}{v\cdotot n(x)}\right)\right]v=\left((v\cdotot n(x))I+(n(x)\otimes v))\right)(v-v)=0.
\end{align*}
\end{proof}
Now, throughout this section, we study $C^{1}_{t,x,v}(\mathbb{R}_{+}\times \Omega\times \mathbb{R}^{2})$ of $f(t,x,v)$ of \eqref{solution} when
\begin{equation} \label{t1 zero}
0 = t^{1}(t,x,v) \ \text{or equivalently} \ t = t_{\mathbf{b}}(x,v).
\end{equation}
| 1,296 | 129,481 |
en
|
train
|
0.4976.10
|
\subsection{$C^{1}_{v}$ condition of $f$}
Since we assume \eqref{t1 zero}, $X(0;t,x,v) = x^{1}(x,v) = x_{\mathbf{b}}(x,v) \in \partialartial \Omega$. To derive compatibility condition for $C^{1}_{v}$ of $f(t,x,v)$, we consider $v$-perturbation and use the following notation for perturbed trajectory:
\begin{equation}gin{equation} \label{XV epsilon v}
X^{\varepsilonsilon}(0) := X(0;t,x,v+\varepsilonsilon \hat{r}) , \quad V^{\varepsilonsilon}(0):=V(0;t,x,v+\varepsilonsilon \hat{r} ),
\end{equation}
where $\hat{r}\in\mathbb{R}^{2}$ is a unit-vector. As $\varepsilonsilon \rightarrow 0$, we simply get
\begin{equation}gin{equation*}
\lim_{\varepsilonsilon \rightarrow 0} X(0;t,x,v+\varepsilonsilon \hat{r}) = x^{1}(x,v) = x_{\mathbf{b}}(x,v),
\end{equation*}
from continuity of $X(0;t,x,v)$ in $v$. However, $V(0;t,x,v)$ is not continuous in $v$ because of \eqref{BC}. Explicitly, from Lemma \e^{\frac 12}f{nabla xv b},
\begin{equation} \label{R12_v}
\frac{\partial}{\partial\varepsilon}t_{\mathbf{b}}(x, v+\varepsilon\hat{r})\vert_{\varepsilon=0} = \nablala_{v}t_{\mathbf{b}}(x,v)\cdotot\hat{r} = -t_{\mathbf{b}}\frac{\hat{r}\cdotot n(x_{\mathbf{b}}(x,v))}{v\cdotot n(x_{\mathbf{b}}(x,v))},\quad \text{where}\quad v\cdotot n(x_{\mathbf{b}}(x,v)) < 0.
\end{equation}
So we define, for fixed $(x,v)$, $v\neq 0$,
\begin{equation}gin{equation} \label{set R_vel}
\begin{equation}gin{split}
R_{vel, 1} &:= \{ \hat{r}\in \mathbb{S}^{2} : \hat{r}\cdotot n(x_{\mathbf{b}}(x,v)) < 0 \}, \\
R_{vel, 2} &:= \{ \hat{r}\in \mathbb{S}^{2} : \hat{r}\cdotot n(x_{\mathbf{b}}(x,v)) \geq 0 \}. \\
\end{split}
\end{equation}
Then from \eqref{R12_v}, $\nablala_{v}t_{\mathbf{b}}(x,v)\cdotot\hat{r} > 0$ when $\hat{r}\in R_{vel, 1}$ and $\nablala_{v}t_{\mathbf{b}}(x,v)\cdotot\hat{r} \leq 0$ when $\hat{r}\in R_{vel, 2}$. Therefore, for two unit vectors $\hat{r}_1\in R_{vel, 1}$ and $\hat{r}_2\in R_{vel, 2}$, by continuity argument,
\begin{equation}gin{equation*}
\lim_{\varepsilonsilon \rightarrow 0+} V(0;t,x,v+\varepsilonsilon \hat{r}_1) = v, \quad \lim_{\varepsilonsilon \rightarrow 0+} V (0;t,x,v+\varepsilonsilon \hat{r}_2) = v^1=R_{x^{1}}v. \\
\end{equation*}
We consider directional derivatives with respect to $\hat{r}_1$ and $\hat{r}_2$. If $f$ belongs to the $C^1_v$ class, $\nablala_v f(t,x,v)$ exists and directional derivatives of $f$ with respect to $\hat{r}_1,\hat{r}_2$ will be $\nablala_v f(t,x,v) \hat{r}_1,\;\nablala_v f(t,x,v) \hat{r}_2$. Using \eqref{BC}, we have $f_{0}(x^{1}, v) = f_{0}(x^{1}, v^{1})$ and hence
\begin{equation}gin{align*}
\nablala_v f(t,x,v) \hat{r}_1 &= \lim _{\varepsilonsilon\rightarrow 0+} \frac{1}{\varepsilonsilon}\left ( f(t,x,v+\varepsilonsilon \hat{r}_1) - f(t,x,v) \right )\\
&=\lim_{\varepsilonsilon \rightarrow 0+} \frac{1}{\varepsilonsilon}\left( f_0(X(0;t,x,v+\varepsilonsilon \hat{r}_1),V(0;t,x,v+\varepsilonsilon \hat{r}_1)) - f_0(X(0;t,x,v),V(0;t,x,v)) \right)\\
&=\lim_{\varepsilonsilon \rightarrow 0+} \frac{1}{\varepsilonsilon} \left( f_0(X^{\varepsilonsilon}(0), V^{\varepsilonsilon}(0))- f_0 (X^{\varepsilonsilon}(0),v)+f_0(X^{\varepsilonsilon}(0),v) -f_0(X(0),v) \right) \\
&=\nablala_x f_0(X(0),v) \cdotot \lim_{s\rightarrow 0+} \nablala_v X(s) \hat{r}_1+ \nablala_v f_0(X(0),v) \lim_{s \rightarrow 0+} \nablala_v V(s)\hat{r}_1, \\
\nablala_v f(t,x,v) \hat{r}_2 &= \lim _{\varepsilonsilon\rightarrow 0+} \frac{1}{\varepsilonsilon}\left ( f(t,x,v+\varepsilonsilon \hat{r}_2) - f(t,x,v) \right )\\
&=\lim_{\varepsilonsilon \rightarrow 0+} \frac{1}{\varepsilonsilon}\left( f_0(X(0;t,x,v+\varepsilonsilon \hat{r}_2),V(0;t,x,v+\varepsilonsilon \hat{r}_2)) - f_0(X(0;t,x,v),V(0;t,x,v)) \right)\\
&=\lim_{\varepsilonsilon \rightarrow 0+} \frac{1}{\varepsilonsilon} \left( f_0(X^{\varepsilonsilon}(0), V^{\varepsilonsilon}(0))- f_0 (X^{\varepsilonsilon}(0),v^{1})+f_0(X^{\varepsilonsilon}(0),v^{1}) -f_0(X(0),v^{1}) \right) \\
&=\nablala_x f_0(X(0),v^{1}) \cdotot \lim_{s\rightarrow 0-} \nablala_v X(s) \hat{r}_2+ \nablala_v f_0(X(0),v^{1}) \lim_{s \rightarrow 0-} \nablala_v V(s)\hat{r}_2,
\end{align*}
which implies
\begin{equation}gin{eqnarray}
&& \nablala_v f(t,x,v) =\nablala_x f_0(X(0),v) \lim_{s\rightarrow 0+} \nablala_v X(s)+ \nablala_v f_0(X(0),v) \lim_{s \rightarrow 0+} \nablala_v V(s), \label{case12 r1}\\
&& \nablala_v f(t,x,v) =\nablala_x f_0(X(0),v^{1}) \lim_{s\rightarrow 0-} \nablala_v X(s)+ \nablala_v f_0(X(0),v^{1}) \lim_{s \rightarrow 0-} \nablala_v V(s). \label{case12 r2}
\end{eqnarray}
\noindent Since
\begin{equation} \label{nabla XV_v+}
\lim_{s\rightarrow 0+} \nablala_v X(s)= \lim_{s\rightarrow 0+}\nablala_{v}(x-v(t-s)) = -t I_{2\times 2}, \quad \lim_{s \rightarrow 0+} \nablala_v V(s) = \lim_{s \rightarrow 0+} \nablala_v v = I_{2\times 2},
\end{equation}
$\nablala_v f(t,x,v)$ of \eqref{case12 r1} becomes
\begin{equation}gin{equation} \label{c_1}
\nablala_v f(t,x,v) = -t \nablala_x f_0(X(0),v) + \nablala_v f_0(X(0),v). \\
\end{equation}
For \eqref{case12 r2}, using the product rule in Lemma \e^{\frac 12}f{matrix notation} and \eqref{normal} in Lemma \e^{\frac 12}f{d_n}, we have
\begin{equation}gin{equation} \label{nabla XV_v-}
\begin{equation}gin{split}
\lim_{s\rightarrow 0-} \nablala_v X(s)& = \lim_{s\rightarrow 0-} \nablala_v (x^1 - (t^1+s)v^1)= \lim_{s\rightarrow 0-} \nablala_v x^1 + v^{1}\otimes\nablala_{v}t_{\mathbf{b}} \\
&= -t\left(I-\frac{v\otimes n(x^1)}{v\cdotot n(x^1)}\right) - t\frac{v^1 \otimes n(x^1)}{ v \cdotot n(x^1)} \\
&= -t \begin{equation}ig( I -\frac{1}{v\cdotot n(x^1)} \big( 2(v\cdotot n(x^{1}))n(x^{1}) \big)\otimes n(x^{1}) \begin{equation}ig) = -tR_{x^{1}}, \\
\lim_{s \rightarrow 0-} \nablala_v V(s) &= \lim_{s \rightarrow 0-} \nablala_v (R_{x^{1}}v) \\
&=\lim_{s \rightarrow 0-} \left( I- 2(v \cdotot n(x^1) ) \nablala_v n(x^1) -2 n(x^1)\otimes n(x^1) -2 (n(x^1)\otimes v ) \nablala_v n(x^1)\right)\\
&=R_{x^{1}} +2t(v\cdotot n(x^1)) \left( I - \frac{ v \otimes n(x^1)}{ v \cdotot n(x^1)} \right) +2t (n(x^1) \otimes v)\left( I - \frac{ v \otimes n(x^1)}{ v \cdotot n(x^1)} \right) \\
&= R_{x^{1}} + 2t A_{v, x^{1}},
\end{split}
\end{equation}
where $A_{v, x^{1}}$ is defined in \eqref{def A}. Hence, using \eqref{nabla XV_v-}, $\nablala_v f(t,x,v)$ in \eqref{case12 r2} becomes
\begin{equation}gin{align} \label{c_2}
\begin{equation}gin{split}
\nablala_v f(t,x,v) &= -t \nablala_x f_0(X(0),R_{x^{1}}v) R_{x^{1}} \\
&\quad +\nablala_v f_0(X(0),R_{x^{1}}v)R_{x^{1}} + t\nablala_v f_0(X(0),R_{x^{1}}v)\left[2\left((v\cdotot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdotot n(x^1)}\right)\right] \\
&= -t \nablala_x f_0(x^{1},R_{x^1}v) R_{x^{1}} + \nablala_v f_0(x^{1},R_{x^{1}}v) (R_{x^{1}} + 2tA_{v, x^{1}}),
\end{split}
\end{align}
where we used $v\cdotot n (x^1) = - v^1 \cdotot n(x^1)$. Meanwhile, taking $\nablala_{v}$ to specular reflection \eqref{BC} directly, we get
\begin{equation} \label{comp_v}
\nablala_{v}f_{0}(x,v) =\nablala_{v}f_{0}(x,R_{x}v)R_{x}, \quad \forall x\in\partial\Omega.
\end{equation}
Comparing \eqref{c_1}, \eqref{c_2}, and \eqref{comp_v}, we deduce
\begin{equation}gin{align} \label{c_v}
\begin{equation}gin{split}
\nablala_x f_0( x^{1},v) &= \nablala_x f_0(x^{1},R_{x^1}v) R_{x^{1}} - 2\nablala_v f_0(x^{1}, R_{x^{1}}v)A_{v,x^{1}},\quad (x^{1}, v)\in \gamma_{-}.
\end{split}
\end{align}
| 3,583 | 129,481 |
en
|
train
|
0.4976.11
|
\subsection{$C^{1}_{x}$ condition of $f$}
Recall we assumed \eqref{t1 zero}. Similar to previous subsection, we define $x$-perturbed trajectory,
\begin{equation}gin{equation} \label{XV epsilon x}
X^{\varepsilonsilon}(0) :=X(0;t,x+\varepsilonsilon \hat{r}, v ), \quad V^{\varepsilonsilon}(0) := V(0;t,x+\varepsilonsilon \hat{r}, v),
\end{equation}
where $\hat{r}\in\mathbb{R}^{2}$ is a unit-vector. As $\varepsilonsilon \rightarrow 0$, we simply get
\begin{equation}gin{equation*}
\lim_{\varepsilonsilon \rightarrow 0} X(0;t,x+\varepsilonsilon\hat{r},v) = x^{1}(x,v).
\end{equation*}
Similar to previous subsection, using Lemma \e^{\frac 12}f{nabla xv b},
\begin{equation} \label{R12_x}
\frac{\partial}{\partial\varepsilon}t_{\mathbf{b}}(x+\varepsilon\hat{r}, v)\big\vert_{\varepsilon=0} = \nablala_{x}t_{\mathbf{b}}(x,v)\cdotot\hat{r} = \frac{\hat{r}\cdotot n(x_{\mathbf{b}}(x,v))}{v\cdotot n(x_{\mathbf{b}}(x,v))},\quad \text{where}\quad v\cdotot n(x_{\mathbf{b}}(x,v)) < 0.
\end{equation}
So we define, for fixed $(x,v)$, $v\neq 0$,
\begin{equation}gin{equation} \label{set R_sp}
\begin{equation}gin{split}
R_{sp, 1} &:= \{ \hat{r}\in \mathbb{S}^{2} : \hat{r}\cdotot n(x_{\mathbf{b}}(x,v)) > 0 \}, \\
R_{sp, 2} &:= \{ \hat{r}\in \mathbb{S}^{2} : \hat{r}\cdotot n(x_{\mathbf{b}}(x,v)) \leq 0 \}. \\
\end{split}
\end{equation}
Then from \eqref{R12_x}, $\nablala_{x}t_{\mathbf{b}}(x,v)\cdotot\hat{r} > 0$ when $\hat{r}\in R_{sp, 1}$ and $\nablala_{x}t_{\mathbf{b}}(x,v)\cdotot\hat{r} \leq 0$ when $\hat{r}\in R_{sp, 2}$. Therefore, for two unit vectors $\hat{r}_1\in R_{sp, 1}$ and $\hat{r}_2\in R_{sp, 2}$, by continuity argument,
\begin{equation}gin{equation*}
\lim_{\varepsilonsilon \rightarrow 0+} V(0;t,x,v+\varepsilonsilon \hat{r}_1) = v, \quad \lim_{\varepsilonsilon \rightarrow 0+} V (0;t,x,v+\varepsilonsilon \hat{r}_2) = v^1=R_{x^{1}}v.
\end{equation*}
Using similar arguments in previous subsection, we obtain
\begin{equation}gin{eqnarray}
&& \nablala_x f(t,x,v) \hat{r}_{1} =\nablala_x f_0(X(0),v) \lim_{s\rightarrow 0+} \nablala_x X(s)\hat{r}_{1} + \nablala_v f_0(X(0),v) \lim_{s \rightarrow 0+} \nablala_x V(s)\hat{r}_{1}, \label{case12 r1 x}\\
&& \nablala_x f(t,x,v)\hat{r}_{2} =\nablala_x f_0(X(0),Rv) \lim_{s\rightarrow 0-} \nablala_x X(s) \hat{r}_{2} + \nablala_v f_0(X(0),Rv) \lim_{s \rightarrow 0-} \nablala_x V(s) \hat{r}_{2}. \label{case12 r2 x}
\end{eqnarray}
Since
\begin{equation} \label{nabla XV_x+}
\lim_{s\rightarrow 0+} \nablala_x X(s) = I_{2\times 2},\quad \lim _{s \rightarrow 0+} \nablala_x V(s)=0_{2 \times 2},
\end{equation}
$\nablala_{x}f(t,x,v)$ of \eqref{case12 r1 x} becomes
\begin{equation}gin{equation} \label{c_3}
\nablala_x f(t,x,v) = \nablala_x f_0(X(0),v).
\end{equation}
For $\nablala_{x}f(t,x,v)$ of \eqref{case12 r2 x}, we apply the product rule in Lemma \e^{\frac 12}f{matrix notation} and \eqref{normal} in Lemma \e^{\frac 12}f{d_n} to get
\begin{equation}gin{equation} \label{nabla XV_x-}
\begin{equation}gin{split}
\lim_{s\rightarrow 0-} \nablala_x X(s)& = \lim_{s\rightarrow 0-} \nablala_x (x^1 - (t^1+s)v^1)=\left(I-\frac{v\otimes n(x^1)}{v\cdotot n(x^1)}\right) + \frac{v^1 \otimes n(x^1)}{ v \cdotot n(x^1)} = R_{x^{1}},\\
\lim_{s \rightarrow 0-} \nablala_x V(s) &= \lim_{s \rightarrow 0-} \nablala_x (R_{x^{1}}v) \\
&=\lim_{s \rightarrow 0-} \left( - 2(v \cdotot n(x^1) ) \nablala_x n(x^1) -2 (n(x^1)\otimes v ) \nablala_x n(x^1)\right)\\
&=-2(v\cdotot n(x^1)) \left( I - \frac{ v \otimes n(x^1)}{ v \cdotot n(x^1)} \right) -2 (n(x^1) \otimes v)\left( I - \frac{ v \otimes n(x^1)}{ v \cdotot n(x^1)} \right) \\
&= -2 A_{v,x^{1}}.
\end{split}
\end{equation}
Hence, using \eqref{nabla XV_x-}, $\nablala_x f(t,x,v)$ in \eqref{case12 r2 x} becomes
\begin{equation}gin{align} \label{c_4}
\begin{equation}gin{split}
\nablala_x f(t,x,v) &= \nablala_x f_0(X(0), R_{x^{1}}v) R_{x^{1}} - 2\nablala_v f_0(X(0),R_{x^{1}}v) A_{v,x^{1}}.
\end{split}
\end{align}
Combining \eqref{c_3} and \eqref{c_4},
\begin{equation}gin{align}\label{c_x}
\begin{equation}gin{split}
\nablala_x f_0( x^{1},v) &= \nablala_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}} - 2\nablala_v f_0(x^{1}, R_{x^{1}}v) A_{v,x^{1}},\quad (x^{1}, v)\in \gamma_{-},
\end{split}
\end{align}
which is identical to \eqref{c_v}. \\
\hide
which exactly coincides with \eqref{c_v}. We rewrite compatibility condition as
\begin{equation}
\begin{equation}gin{split}
\nablala_x f_0( x^{1},v) &= \nablala_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}}
- \nablala_v f_0(x^{1}, R_{x^{1}}v)\left[2\left((v\cdotot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdotot n(x^1)}\right)\right],\quad (x^{1},v) \in \gamma_{-}.
\end{split}
\end{equation}
\unhide
\subsection{$C^{1}_{t}$ condition of $f$}
To check the $C^1_t$ condition, we define
\begin{equation}gin{align}\label{Perb_t}
X^\varepsilonsilon(0):=X(0;t+\varepsilonsilon, x,v), \quad V^\varepsilonsilon(0):= V(0;t+\varepsilonsilon,x,v).
\end{align}
More specifically,
\begin{equation}gin{align*}
X^\varepsilonsilon(0)=x^1-(t^1+\varepsilonsilon) R_{x^{1}}v, \quad V^\varepsilonsilon(0)= R_{x^{1}}v,\quad \varepsilonsilon > 0,
\end{align*}
and
\begin{equation}gin{align*}
X^\varepsilonsilon(0)=x-(t+\varepsilonsilon)v,\quad V^\varepsilonsilon(0)=v,\quad \varepsilonsilon < 0.
\end{align*}
Thus, the case ($\varepsilonsilon>0$) describes the situation after bounce (backward in time) and the case ($\varepsilonsilon<0$) describes the situation just before bounce (backward in time). Then, for $\varepsilonsilon>0$,
\begin{align*}
f_t(t,x,v)&= \lim_{\varepsilon\rightarrow0+}\frac{f(t+\varepsilon,x,v)-f(t,x,v)}{\varepsilon} \\
&=\lim_{\varepsilon\rightarrow 0+} \frac{f_0(X^\varepsilon(0),V^\varepsilon(0))-f_0(X(0),V(0))}{\varepsilon}\\
&=\lim_{\varepsilon\rightarrow 0+} \frac{f_0(X^\varepsilon(0),R_{x^{1}}v)-f_0(X(0),R_{x^{1}}v)}{\varepsilon}\\
&=\nabla_x f_0(x^1,R_{x^{1}}v) \lim_{\varepsilon \rightarrow 0+} \frac{X^\varepsilon(0)-X(0)}{\varepsilon}\\
&=-\nabla_x f_0(x^1,R_{x^{1}}v)R_{x^{1}}v.
\end{align*}
For $\varepsilon<0$, that is, just before the collision (backward in time), we obtain
\begin{align*}
f_t(t,x,v)&= \lim_{\varepsilon\rightarrow0-}\frac{f(t+\varepsilon,x,v)-f(t,x,v)}{\varepsilon} \\
&= \lim_{\varepsilon\rightarrow0-}\frac{f_0(X^\varepsilon(0),v)-f_0(X(0),v)}{\varepsilon}\\
&= \nabla_xf_0(x^1,v) \lim_{\varepsilon\rightarrow0-} \frac{X^\varepsilon(0)-X(0)}{\varepsilon}\\
&=-\nabla_x f_0(x^1,v)v.
\end{align*}
Thus, we derive a $C^1_t$ condition
\begin{equation} \label{c_t}
\nabla_x f_0(x^1,v)v = \nabla_x f_0(x^1,R_{x^{1}}v)R_{x^{1}}v,\quad (x^{1}, v)\in \gamma_{-}.
\end{equation}
Actually, \eqref{c_t} is just a particular case of \eqref{c_v}, because of \eqref{Av=0}.
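For completeness, here is the one-line verification: multiplying \eqref{c_v} (in the form \eqref{c_x}) by $v$ from the right and using $A_{v,x^{1}}v=0$,
\[
\nabla_x f_0(x^{1},v)v = \nabla_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}}v - 2\nabla_v f_0(x^{1}, R_{x^{1}}v) A_{v,x^{1}}v = \nabla_x f_0(x^{1},R_{x^{1}}v) R_{x^{1}}v,
\]
which is exactly \eqref{c_t}; here $A_{v,x^{1}}v=0$ follows from $\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)v = v - v = 0$.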
\hide
{\color{blue}
\begin{remark} \label{trivial case}
Let us consider the trivial, spatially independent case $f(t,x,v) = f_{0}(v)$. Since specular reflection holds for all $x\in \partial\Omega$, $f_{0}$ should be a radial function, $f_{0}(v) = f_{0}(|v|)$. \eqref{Cond} also holds in this case, because the vector $\nabla_{v}f_{0}(x^{1},Rv)$ points in the $Rv$ direction and
\begin{equation}
\begin{split}
&\underbrace{(Rv)}_{\text{row vector}}\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right) \\
&= \left((v\cdot n(x^1)) (Rv) + (Rv\cdot n(x^1)) v \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right) \\
&= (v\cdot n(x^1)) Rv - (v\cdot n(x^{1}))v - (Rv\cdot v)n(x^{1}) + |v|^{2}n(x^{1}) \\
&= (v\cdot n(x^1)) \big(v - 2(v\cdot n(x^{1}))n(x^{1}) \big)- (v\cdot n(x^{1}))v - \big(|v|^{2} - 2|v\cdot n(x^{1})|^{2}\big) n(x^{1}) + |v|^{2}n(x^{1}) \\
&= 0.
\end{split}
\end{equation}
\end{remark}
}
\unhide
\subsection{Proof of Theorem \ref{thm 1}}
\begin{proof}[Proof of Theorem \ref{thm 1}]
If $0 \neq t^{k}$ for every $k\in \mathbb{N}$, then $X(0;t,x,v)$ and $V(0;t,x,v)$ are both smooth functions of $(t,x,v)$. By the chain rule and $f_0\in C^{1}_{x,v}$, $f(t,x,v)$ of \eqref{solution} is also $C^{1}_{t,x,v}$. \\
Now let us assume $ 0 = t^{k}(t,x,v)$ for some $k\in \mathbb{N}$. Because of the discontinuity of $V(0;t,x,v)$, we consider the following two cases:
\begin{align*}
& \quad \lim_{s\rightarrow 0+} \nabla_v V(s) \textcolor{blue}{ ( \text{or} \ \nabla_vX(s))} = \underbrace{ \lim_{s\rightarrow 0+} \frac{\partial V(s)\textcolor{blue}{( \text{or} \ \partial X(s))}}{\partial(t^{k-1},x^{k-1}, v^{k-1})} } \frac{\partial(t^{k-1},x^{k-1}, v^{k-1})}{\partial v},\\
& \quad \lim_{s\rightarrow 0-} \nabla_v V(s) \textcolor{blue}{( \text{or} \ \nabla_vX(s))}= \underbrace{ \lim_{s\rightarrow 0-} \frac{\partial V(s)\textcolor{blue}{( \text{or} \ \partial X(s))}}{\partial(t^{k},x^{k}, v^{k})}\frac{\partial(t^{k},x^k,v^k)}{\partial(t^{k-1},x^{k-1},v^{k-1})} } \frac{\partial(t^{k-1},x^{k-1}, v^{k-1})}{\partial v}.
\end{align*}
First, we note that the factor $\displaystyle \frac{\partial(t^{k-1},x^{k-1}, v^{k-1})}{\partial v}$, which is common to both expressions above, is smooth. From Lemma \ref{nabla xv b}, $t^{1}(t,x,v) = t - t_{\mathbf{b}}(x,v)$, $x^{1}(x,v) = x - t_{\mathbf{b}}(x,v) v$, and $v^{1}(x,v) = R_{x_{\mathbf{b}}(x,v)}v$ are all smooth functions of $(x,v)$ if $(t^{1}, x^{1}, v^{1})$ is nongrazing at $x^{1}$. Now, let us consider the mapping
\[
(t^{1}, x^{1}, v^{1}) \mapsto (t^{2}, x^{2}, v^{2})
\]
which is smooth by
\[
t^{2} = t^{1} - t_{\mathbf{b}}(x^{1}, v^{1}),\quad x^{2} = x^{1} - v^{1}t_{\mathbf{b}}(x^{1}, v^{1}),\quad v^{2} = R_{x^{2}}v^{1}.
\]
(Note that the derivative of $t_{\mathbf{b}}$ on $\partial\Omega \times \mathbb{R}^{3}_{v}$ can be performed by its local parametrization.)
By the chain rule, we easily derive that $(t^{k}, x^{k}, v^{k})$ is smooth in $(x, v)$. For the explicit computations and their Jacobians, we refer to \cite{KimLee}.
Now, it suffices to compare the two underbraced terms above. {\bf This means that no generality is lost by setting $k=1$.} \\
Initial-boundary compatibility conditions for $C^{1}_{t,x,v}$ were obtained in \eqref{c_v}, \eqref{c_x}, and \eqref{c_t}. Since the compatibility conditions \eqref{c_x} and \eqref{c_t} are covered by \eqref{c_v}, $f(t,x,v)\in C^{1}_{t,x,v}$ once \eqref{c_v} holds. To change \eqref{c_v} into the more symmetric presentation \eqref{C1 cond}, we apply \eqref{comp_v} and multiply both sides from the right by the invertible matrix $R_{x^{1}}$ to obtain
\begin{equation*}
\begin{split}
&\big( \nabla_x f_0(x^{1},v) + \nabla_v f_0(x^{1}, v) R_{x^1}A_{v,x^{1}} \big) R_{x^1} = \nabla_x f_0(x^1, R_{x^1}v)
- \nabla_v f_0(x^1, R_{x^1}v) A_{v,x^{1}}R_{x^1}. \\
\end{split}
\end{equation*}
This yields
\begin{equation*}
\begin{split}
\Big[ \nabla_x f_0( x,v) + \nabla_v f_0(x, v) \frac{ (Qv)\otimes (Qv) }{v\cdot n(x)} \Big]R_x
&=
\nabla_x f_0(x, R_xv)
+ \nabla_v f_0(x, R_xv) \frac{ (QR_xv)\otimes (QR_xv) }{R_xv\cdot n(x)},
\end{split}
\end{equation*}
by \eqref{RA}. \\
Now we claim that the compatibility condition \eqref{c_v} also holds for $(x^{1}, v)\in \gamma_{+}$.
Multiplying both sides by $R_{x^{1}}$ and using $R^2_{x^1} = I$, $R_{x^1}n(x^1) = -n(x^1)$, and \eqref{comp_v}, we obtain
\begin{equation} \label{C1 gamma+}
\begin{split}
\nabla_x f_0( x^{1}, R_{x^1}v) &= \nabla_x f_0(x^{1}, v) R_{x^{1}}
+ 2 \nabla_v f_0(x^{1}, v) \underbrace{ R_{x^1}\left[\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right] R_{x^1} }.
\end{split}
\end{equation}
Since $R_{x^1}=R^{T}_{x^1}$ (transpose), the underbraced term is written as
\begin{equation}
\begin{split}
&R_{x^{1}} A_{v,x^{1}} R_{x^{1}} \\
&= R_{x^{1}} \left[\left((v\cdot n(x^1))I+(n(x^1) \otimes v) \right)\left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right] R_{x^{1}} \\
&= -(R_{x^1}v\cdot n(x^{1}))I - R_{x^1}v\otimes R_{x^1}n(x^{1}) + R_{x^1}n(x^{1})\otimes R_{x^1}v - \frac{R_{x^1}n(x^{1})\otimes R_{x^1}n(x^{1})}{v\cdot n(x^{1})}|R_{x^1}v|^{2} \\
&= - \left[\left((R_{x^1}v\cdot n(x^1))I+(n(x^1) \otimes R_{x^1}v) \right)\left(I-\frac{R_{x^1}v\otimes n(x^1)}{R_{x^1}v\cdot n(x^1)}\right)\right] \\
&= -A_{R_{x^1}v,x^{1}},
\end{split}
\end{equation}
and hence \eqref{C1 gamma+} is identical to \eqref{c_v} when $(x^{1},v)\in \gamma_{+}$.
Finally, we prove that $f(t,x,v)$ is not of class $C^1_{t,x,v}$ at a time $t$ with $t^k(t,x,v)=0$ for some $k$ if \eqref{C1 cond} does not hold. As before, by the chain rule we may set $t^1(t,x,v)=0$. Thus, it suffices to prove that $f(t,x,v)$ is not of class $C^1_{t,x,v}$ at a time $t$ satisfying $t^1(t,x,v)=0$ if \eqref{C1 cond} is not satisfied at $(X(0;t,x,v),v)\in \gamma_-$. Recall the directional derivatives with respect to $\hat{r}_1$ and $\hat{r}_2$ used to obtain $f \in C^1_{t,x,v}(\mathbb{R}_+\times \mathcal{I})$. In the $C^1_v$ case, we deduced the two conditions \eqref{case12 r1} and \eqref{case12 r2} from these directional derivatives. However, if the initial data $f_0$ does not satisfy condition \eqref{C1 cond} at $(X(0;t,x,v),v)\in \gamma_-$, the two conditions cannot coincide. This means that $f(t,x,v)$ is not $C^1_v$ at any $t$ such that $t^1(t,x,v)=0$. The $C^1_{t}$ and $C^1_{x}$ cases are handled in the same way, with the same conclusion.
\end{proof}
\section{Initial-boundary compatibility condition for $C^{2}_{t,x,v}$}
As mentioned at the beginning of the previous section, we treat the problem \eqref{eq} as a 2D problem in the unit disk $\{x\in\mathbb{R}^{2} : |x| < 1 \}$.
Throughout this section, we use the following notation to interchange columns and rows for notational convenience,
\[
\begin{pmatrix}
a \\ b
\end{pmatrix}
\stackrel{c\leftrightarrow r}{=}
\begin{pmatrix}
a & b
\end{pmatrix}
,\quad
\begin{pmatrix}
a & b
\end{pmatrix}
\stackrel{r\leftrightarrow c}{=}
\begin{pmatrix}
a \\ b
\end{pmatrix}. \\
\]
Similar to the previous section, we assume \eqref{t1 zero}, i.e., $0=t^{1}(t,x,v)$. In this section we also assume that $f_0$ satisfies the specular reflection condition \eqref{BC} and the $C^{1}_{t,x,v}$ compatibility condition \eqref{c_v} (or \eqref{C1 cond}). \\
\subsection{Condition for $\nabla_{xv}$}
Similar to the previous section, we split the perturbed direction as in \eqref{set R_sp}. We also note that $\nabla_{v}f(t,x,v)$ can be written as \eqref{c_1} or \eqref{c_2}, which are identical under the assumption \eqref{c_v}. First, using \eqref{c_1}, $\hat{r}_{1}$ of \eqref{set R_sp}, and notation \eqref{XV epsilon x},
\begin{equation} \label{nabla_xv f case1}
\begin{split}
&\nabla_{xv} f(t,x,v) \hat{r}_1 \stackrel{c\leftrightarrow r}{=} \lim _{\varepsilon\rightarrow 0+} \frac{1}{\varepsilon}\left ( \nabla_{v}f(t,x+\varepsilon \hat{r}_1,v) - \nabla_{v}f(t,x,v) \right ) \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon}\Big( \nabla_{v}\big[ f_0(X(0;t,x+\varepsilon \hat{r}_1,v),V(0;t,x+\varepsilon \hat{r}_1,v)) \big] - \big( -t \nabla_x f_0(X(0),v) + \nabla_v f_0(X(0),v) \big)\Big) \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) \\
&\quad\quad\quad\quad - \big( -t \nabla_x f_0(X(0),v) + \nabla_v f_0(X(0),v) \big) \Big\} \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{x}f_{0}(X(0), v) \big]
+
\big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), v) - \nabla_{x}f_{0}(X(0), v) \big]
+
\big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), v) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\
&\stackrel{r\leftrightarrow c}{=} \nabla_{xx}f_{0}(X(0),v)\lim_{s\rightarrow 0+} \nabla_{x}X(s) (-t )\hat{r}_1 + \nabla_{xv}f_{0}(X(0),v)\lim_{s\rightarrow 0+}\nabla_{x}X(s) \hat{r}_1 \\
&= \Big( \nabla_{xx}f_{0}(x^{1},v) (-t ) + \nabla_{xv}f_{0}(x^{1},v) \Big) \hat{r}_1,
\end{split}
\end{equation}
where we have used \eqref{nabla XV_v+}, \eqref{nabla XV_x+}, $\nabla_{v}X^{\varepsilon}(0)=-t I_{2}$, and $\nabla_{v}V^{\varepsilon}(0)= I_{2}$. Similarly, using \eqref{c_2} and $\hat{r}_{2}$ of \eqref{set R_sp},
\begin{equation} \notag
\begin{split}
&\nabla_{xv} f(t,x,v) \hat{r}_2 \stackrel{c\leftrightarrow r}{=} \lim _{\varepsilon\rightarrow 0+} \frac{1}{\varepsilon}\left ( \nabla_{v}f(t,x+\varepsilon \hat{r}_2,v) - \nabla_{v}f(t,x,v) \right )\\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) \\
&\quad\quad\quad - \big( -t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} + \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}})\big) \Big\} \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}}) \Big\} \\
&:= I_{xv,1} + I_{xv,2}.
\end{split}
\end{equation}
Using \eqref{nabla XV_v-} and \eqref{nabla XV_x-},
\begin{equation*}
\begin{split}
I_{xv,1} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}X(s) \\
&\quad\quad \quad\quad\quad + \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-} \nabla_{v}X(s) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} ,\quad\quad \lim_{s\rightarrow 0-} \nabla_{v}X(s) = -tR_{x^{1}}, \\
&= \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0} \frac{1}{\varepsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big)
+ \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon}\Big( \nabla_{x}f_{0}(X^{\varepsilon}(0),V^{\varepsilon}(0)) - \nabla_{x}f_{0}(X(0), R_{x^1}v) \Big) (-tR_{x^1}) \\
&\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} \\
&\quad + (-tR_{x^1}) \Big( \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_{x}X(s) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{x}V(s) \Big) \hat{r}_{2} \\
&=
\underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}X(0;t, x+\varepsilon \hat{r}_{2}, v) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} }_{:=(*)_{xv,1}\hat{r}_{2} } \\
&\quad
+ (-tR_{x^1})\big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^1} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \hat{r}_{2}, \\
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
I_{xv,2} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}V(s) \\
&\quad\quad\quad\quad + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) (R_{x^{1}} + 2tA_{v,x^{1}}) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}})\Big\} \\
&= \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}V^{\varepsilon}(0) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \\
&\quad\quad\quad\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \Big) (R_{x^{1}} + 2tA_{v,x^{1}}) \\
&\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}V^{\varepsilon}(0) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} \\
&\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}})\Big( \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_{x}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{x}V(s) \Big) \hat{r}_{2} \\
&=
\underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}V(0;t,x+\varepsilon \hat{r}_{2}, v) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} }_{:=(*)_{xv,2} \hat{r}_{2}} \\
&\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) \big] \hat{r}_{2}. \\
\end{split}
\end{equation*}
Now we compute the two underbraced terms $(*)_{xv,1}$ and $(*)_{xv,2}$. \\
\begin{equation} \label{xv star1}
\begin{split}
(*)_{xv,1} \hat{r}_{2} &= \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} \\
&=
\Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{1}}X(s)) \hat{r}_{2}, \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{2}}X(s)) \hat{r}_{2} \Big]^{T} \\
&= \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{1}}X(s))
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{2}}X(s))
\end{bmatrix}
\hat{r}_{2} \\
&= \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-t R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-t R_{x^{1}(x,v)}^2)
\end{bmatrix}
\end{split}
\end{equation}
Similarly,
\begin{equation} \label{xv star2}
\begin{split}
(*)_{xv,2} \hat{r}_{2}
&= \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{1}}V(s))
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{s \rightarrow 0-}\nabla_{x}(\partial_{v_{2}}V(s))
\end{bmatrix}
\hat{r}_{2} \\
&= \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1 + 2t A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2 + 2t A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}, \\
\end{split}
\end{equation}
where $A^i$ denotes the $i$th column of the matrix $A$. Therefore,
\begin{equation} \label{nabla_xv f case2}
\begin{split}
\nabla_{xv}f(t,x,v)
&= \underline{(*)_{xv,1}}_{\eqref{xv star1}} + \underline{(*)_{xv,2}}_{\eqref{xv star2}} \\
&\quad + (-tR_{x^1})\big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^1} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \\
&\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) \big]. \\
\end{split}
\end{equation}
From \eqref{nabla_xv f case1} and \eqref{nabla_xv f case2}, we get the following compatibility condition
\begin{equation} \label{xv comp}
\begin{split}
&(-t)\nabla_{xx}f_{0}(x^{1},v) + \nabla_{xv}f_{0}(x^{1},v) \\
&= \underline{(*)_{xv,1}}_{\eqref{xv star1}} + \underline{(*)_{xv,2}}_{\eqref{xv star2}} \\
&\quad + (-tR_{x^1}) \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^1} + (-tR_{x^1})\nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\
&\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}}
+ (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) .
\end{split}
\end{equation}
\subsection{Condition for $\nabla_{vv}$}
We split the perturbed direction as in \eqref{set R_vel}. $\nabla_{v}f(t,x,v)$ can be written as \eqref{c_1} or \eqref{c_2}. Using \eqref{c_1}, $\hat{r}_{1}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v}, \\
\begin{equation} \label{nabla_vv f case1}
\begin{split}
&\nabla_{vv} f(t,x,v) \hat{r}_1 \\
&\stackrel{c\leftrightarrow r}{=} \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{x}f_{0}(X(0), v) \big]
+
\big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ -t\big[ \nabla_{x}f_{0}(X^{\varepsilon}(0), v+\varepsilon \hat{r}_{1} ) - \nabla_{x}f_{0}(X(0), v) \big]
+
\big[ \nabla_{v}f_{0}(X^{\varepsilon}(0), v+\varepsilon \hat{r}_{1} ) - \nabla_{v}f_{0}(X(0), v) \big] \Big\} \\
&\stackrel{r\leftrightarrow c}{=} \Big[ -t \nabla_{xx}f_{0}(X(0),v)\lim_{s\rightarrow 0+} \nabla_{v}X(s) -t \nabla_{vx}f_{0}(X(0),v)\lim_{s\rightarrow 0+} \nabla_{v}V(s) \\
&\quad + \nabla_{xv}f_{0}(X(0),v)\lim_{s\rightarrow 0+}\nabla_{v}X(s) + \nabla_{vv}f_{0}(X(0),v)\lim_{s\rightarrow 0+}\nabla_{v}V(s) \Big] \hat{r}_1 \\
&= \Big[ (-t ) \nabla_{xx}f_{0}(x^{1},v) (-t) + (-t )\nabla_{vx}f_{0}(x^{1},v) + \nabla_{xv}f_{0}(x^{1},v) (-t ) + \nabla_{vv}f_{0}(x^{1},v) \Big] \hat{r}_1,
\end{split}
\end{equation}
where we have used \eqref{nabla XV_v+}; note that $\nabla_{v}X^{\varepsilon}(0)=-t I_{2}$ and $\nabla_{v}V^{\varepsilon}(0)= I_{2}$ hold for the $v+\varepsilon\hat{r}_1$ case as well. \\
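(Indeed, since $\hat{r}_1$ is the direction for which the perturbed backward trajectory does not cross the boundary before time $0$, we have $X^{\varepsilon}(0)=x-t(v+\varepsilon\hat{r}_1)$ and $V^{\varepsilon}(0)=v+\varepsilon\hat{r}_1$, from which both identities follow.)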
Similarly, using \eqref{c_2}, $\hat{r}_{2}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v},
\begin{equation*}
\begin{split}
&\nabla_{vv} f(t,x,v) \hat{r}_{2} \\
&\stackrel{c\leftrightarrow r}{=} \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}}) \Big\} \\
&:= I_{vv,1} + I_{vv,2},
\end{split}
\end{equation*}
and $I_{vv,1}$ and $I_{vv,2}$ are computed as
\begin{equation*}
\begin{split}
I_{vv,1} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{\nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}X(s) \\
&\quad\quad \quad\quad\quad + \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-} \nabla_{v}X(s) + t \nabla_x f_0(X(0),R_{x^1}v) R_{x^{1}} \Big\} ,\quad \lim_{s\rightarrow 0-} \nabla_{v}X(s) = -tR_{x^{1}}, \\
&\stackrel{r \leftrightarrow c}{=} \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}X^{\varepsilon}(0) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} \\
&\quad + (-tR_{x^1}) \Big( \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_{v}X(s) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \hat{r}_{2} \\
&=
\underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}X(0; t, x, v+\varepsilon \hat{r}_{2}) - \lim_{s\rightarrow 0-} \nabla_{v}X(s) \Big) \Big]^{T} }_{(*)_{vv,1}\hat{r}_{2} } \\
&\quad + (-tR_{x^1}) \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^1} + 2tA_{v,x^{1}}) \big] \hat{r}_{2}, \\
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
I_{vv,2} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{v}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s\rightarrow 0-}\nabla_{v}V(s) \\
&\quad\quad\quad\quad + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) (R_{x^{1}} + 2tA_{v,x^{1}}) - \nabla_v f_0(X(0),R_{x^1}v) (R_{x^1} + 2tA_{v, x^{1}}) \Big\} \\
&\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}V^{\varepsilon}(0) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} \\
&\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \Big( \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_{v}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \hat{r}_{2} \\
&=
\underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{v}V(0; t, x, v+\varepsilon \hat{r}_{2}) - \lim_{s\rightarrow 0-}\nabla_{v}V(s) \Big) \Big]^{T} }_{(*)_{vv,2} \hat{r}_{2} } \\
&\quad\quad\quad\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big] \hat{r}_{2}. \\
\end{split}
\end{equation*}
Similar to \eqref{xv star1} and \eqref{xv star2}, using Lemma \ref{nabla xv b},
\begin{equation} \label{vv star1}
\begin{split}
(*)_{vv,1} \hat{r}_{2}
&= \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(-t R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(-t R_{x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}
=
t^{2}
\begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}, \\
\end{split}
\end{equation}
\begin{equation} \label{vv star2}
\begin{split}
(*)_{vv,2} \hat{r}_{2}
&= \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^1 + 2t A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^2 + 2t A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2} \\
&=
-t
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}
-
2t^{2}
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}
\\
&\quad +
2t
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}}
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}}
\end{bmatrix}
\hat{r}_{2}.
\\
\end{split}
\end{equation}
Hence, we get
\begin{equation} \label{nabla_vv f case2}
\begin{split}
&\nabla_{vv} f(t,x,v) \\
&= \underline{(*)_{vv,1}}_{\eqref{vv star1}} + \underline{(*)_{vv,2}}_{\eqref{vv star2}} \\
&\quad + (-tR_{x^1}) \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^1} + 2tA_{v,x^{1}}) \big] \\
&\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big]. \\
\end{split}
\end{equation}
Then from \eqref{nabla_vv f case1} and \eqref{nabla_vv f case2} we get the following compatibility condition.
\begin{equation} \label{vv comp}
\begin{split}
&(-t ) \nabla_{xx}f_{0}(x^{1},v) (-t) + (-t )\nabla_{vx}f_{0}(x^{1},v) + \nabla_{xv}f_{0}(x^{1},v) (-t ) + \nabla_{vv}f_{0}(x^{1},v) \\
&= \underline{(*)_{vv,1}}_{\eqref{vv star1}} + \underline{(*)_{vv,2}}_{\eqref{vv star2}} \\
&\quad + (-tR_{x^1}) \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + (-tR_{x^1})\nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^1} + 2tA_{v,x^{1}}) \\
&\quad + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^1}) + (R_{x^{1}} + 2tA^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}). \\
\end{split}
\end{equation}
\subsection{Condition for $\nabla_{xx}$}
We split the perturbed direction as in \eqref{set R_sp}. $\nabla_{x}f(t,x,v)$ can be written as \eqref{c_3} or \eqref{c_4}, which are identical due to \eqref{c_v}. Using \eqref{c_3}, $\hat{r}_{1}$ of \eqref{set R_sp}, and notation \eqref{XV epsilon x}, \\
\begin{equation} \label{nabla_xx f case1}
\begin{split}
&\nabla_{xx} f(t,x,v) \hat{r}_1 \stackrel{c \leftrightarrow r}{=} \lim _{\varepsilon\rightarrow 0+} \frac{1}{\varepsilon}\left ( \nabla_{x}f(t,x+\varepsilon \hat{r}_1,v) - \nabla_{x}f(t,x,v) \right ) \\
&= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon}\Big( \nabla_{x}\big[ f_0(X(0;t,x+\varepsilon \hat{r}_1,v),V(0;t,x+\varepsilon \hat{r}_1,v)) \big] - \nabla_{x}f_{0}(X(0),v) \Big) \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \underbrace{\nabla_{x}V^{\varepsilon}(0)}_{=0} - \nabla_{x}f_{0}(X(0),v) \Big\} \\
&\stackrel{r \leftrightarrow c}{=} \nabla_{xx}f_{0}(x^{1},v) \lim_{s \rightarrow 0+}\nabla_{x}X(s) \hat{r}_1
= \nabla_{xx}f_{0}(x^{1},v) \hat{r}_1,
\end{split}
\end{equation}
where we have used \eqref{nabla XV_v+}, \eqref{nabla XV_x+}, $\nabla_{x}X^{\varepsilon}(0) = I_{2}$, and $\nabla_{x}V^{\varepsilon}(0)= 0$. Similarly, using \eqref{c_4}, $\hat{r}_{2}$ of \eqref{set R_sp}, and notation \eqref{XV epsilon x}, \\
\begin{equation} \notag
\begin{split}
&\nabla_{xx} f(t,x,v) \hat{r}_2 \stackrel{c \leftrightarrow r}{=} \lim _{\varepsilon\rightarrow 0+} \frac{1}{\varepsilon}\left ( \nabla_{x}f(t,x+\varepsilon \hat{r}_2,v) - \nabla_{x}f(t,x,v) \right )\\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) \\
&\quad\quad\quad - \big( \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} - 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \big) \Big\} \\
&= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big\} \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\
&:= I_{xx,1} + I_{xx,2} ,
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
I_{xx,1} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s \rightarrow 0-}\nabla_{x}X(s) \\
&\quad + \Big( \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) R_{x^{1}} - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big) \Big\} \\
&\stackrel{r \leftrightarrow c}{=}
\underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{x}X(0; t, x+\varepsilon\hat{r}_{2}, v) - \lim_{s \rightarrow 0-}\nabla_{x}X(s) \Big) \Big]^{T} }_{(*)_{xx,1}\hat{r}_{2}} \\
&\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \hat{r}_{2},
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
I_{xx,2} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0))\lim_{s \rightarrow 0-}\nabla_{x}V(s) \\
&\quad - 2\nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) A_{v,x^{1}} + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\
&\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{x}V^{\varepsilon}(0) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} \\
&\quad + (- 2A^{T}_{v,x^{1}}) \Big\{ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{x}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big\} \hat{r}_{2} \\
&= \underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{x}V(0; t, x+\varepsilon\hat{r}_{2}, v) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} }_{(*)_{xx,2}\hat{r}_{2}} \\
&\quad + (- 2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2 A_{v,x^{1}}) \big] \hat{r}_{2}. \\
\end{split}
\end{equation*}
Similar to \eqref{xv star1} and \eqref{xv star2},
\begin{equation} \label{xx star1}
\begin{split}
(*)_{xx,1} \hat{r}_{2}
&= \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}, \\
\end{split}
\end{equation}
\begin{equation} \label{xx star2}
\begin{split}
(*)_{xx,2} \hat{r}_{2}
&= \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(- 2 A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(- 2 A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}. \\
\end{split}
\end{equation}
Hence,
\begin{equation} \label{nabla_xx f case2}
\begin{split}
&\nabla_{xx} f(t,x,v) \\
&= \underline{(*)_{xx,1}}_{\eqref{xx star1}} + \underline{(*)_{xx,2}}_{\eqref{xx star2}} \\
&\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \big] \\
&\quad + (- 2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2 A_{v,x^{1}}) \big].
\end{split}
\end{equation}
Then, from \eqref{nabla_xx f case1} and \eqref{nabla_xx f case2}, we get the following compatibility condition
\begin{equation} \label{xx comp}
\begin{split}
&\nabla_{xx}f_{0}(x^{1},v) \\
&= \underline{(*)_{xx,1}}_{\eqref{xx star1}} + \underline{(*)_{xx,2}}_{\eqref{xx star2}} \\
&\quad + R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\
&\quad + (- 2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (- 2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2 A_{v,x^{1}}). \\
\end{split}
\end{equation}
\subsection{Condition for $\nabla_{vx}$} We split the perturbed direction as in \eqref{set R_vel}. $\nabla_{x}f(t,x,v)$ can be written as \eqref{c_3} or \eqref{c_4}. Using \eqref{c_3}, $\hat{r}_{1}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v}, \\
\begin{equation} \label{nabla_vx f case1}
\begin{split}
&\nabla_{vx} f(t,x,v) \hat{r}_1 \stackrel{c\leftrightarrow r}{=} \lim _{\varepsilon\rightarrow 0+} \frac{1}{\varepsilon}\left ( \nabla_{x}f(t,x,v+\varepsilon \hat{r}_1) - \nabla_{x}f(t,x,v) \right ) \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon}\Big( \nabla_{x}\big[ f_0(X(0;t,x ,v+\varepsilon \hat{r}_1),V(0;t,x ,v+\varepsilon \hat{r}_1)) \big] - \nabla_{x}f_{0}(X(0),v) \Big) \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) - \nabla_{x}f_{0}(X(0),v) \Big\} \\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), v+\varepsilon \hat{r}_{1} ) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), v+\varepsilon \hat{r}_{1} ) \underbrace{ \nabla_{x}V^{\varepsilon}(0) }_{=0} - \nabla_{x}f_{0}(X(0),v) \Big\} \\
&\stackrel{r\leftrightarrow c}{=} \nabla_{xx}f_{0}(x^{1}, v)\lim_{s \rightarrow 0+}\nabla_{v}X(s)\hat{r}_{1} + \nabla_{vx}f_{0}(x^{1}, v) \lim_{s \rightarrow 0+}\nabla_{v}V(s)\hat{r}_{1} \\
&= \big( \nabla_{xx}f_{0}(x^{1}, v)(-t) + \nabla_{vx}f_{0}(x^{1}, v) \big) \hat{r}_{1} , \\
\end{split}
\end{equation}
where we have used \eqref{nabla XV_v+}, \eqref{nabla XV_x+}, $\nabla_{x}X^{\varepsilon}(0) = I_{2}$, and $\nabla_{x}V^{\varepsilon}(0)= 0$. Similarly, using \eqref{c_4}, $\hat{r}_{2}$ of \eqref{set R_vel}, and notation \eqref{XV epsilon v}, \\
\begin{equation} \notag
\begin{split}
&\nabla_{vx} f(t,x,v) \hat{r}_2 \stackrel{c\leftrightarrow r}{=} \lim _{\varepsilon\rightarrow 0+} \frac{1}{\varepsilon}\left ( \nabla_{x}f(t,x,v+\varepsilon \hat{r}_2) - \nabla_{x}f(t,x,v) \right )\\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) + \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) \\
&\quad\quad\quad - \big( \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} - 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \big) \Big\} \\
&= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big\} \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\
&:= I_{vx,1} + I_{vx,2},
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
I_{vx,1} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}X^{\varepsilon}(0) - \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \lim_{s \rightarrow 0-}\nabla_{x}X(s) \\
&\quad + \Big( \nabla_{x}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) R_{x^{1}} - \nabla_x f_0(X(0), R_{x^1}v) R_{x^{1}} \Big) \Big\} \\
&\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{x}X^{\varepsilon}(0) - \lim_{s \rightarrow 0-}\nabla_{x}X(s) \Big) \Big]^{T} \\
&\quad + R_{x^{1}} \Big\{ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}X(s) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}V(s) \Big\} \hat{r}_{2} \\
&= \underbrace{ \Big[ \nabla_{x}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{x}X(0; t, x, v+\varepsilon \hat{r}_{2}) - \lim_{s \rightarrow 0-}\nabla_{x}X(s) \Big) \Big]^{T} }_{(*)_{vx,1}
\hat{r}_{2} } \\
&\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)(-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big] \hat{r}_{2},
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
I_{vx,2} &:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big\{ \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) \nabla_{x}V^{\varepsilon}(0) - \nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0))\lim_{s \rightarrow 0-}\nabla_{x}V(s) \\
&\quad - 2\nabla_{v}f_{0}(X^{\varepsilon}(0), V^{\varepsilon}(0)) A_{v,x^{1}} + 2\nabla_v f_0(X(0),R_{x^1}v) A_{v,x^{1}} \Big\} \\
&\stackrel{r\leftrightarrow c}{=} \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{x}V^{\varepsilon}(0) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} \\
&\quad + (-2A^{T}_{v,x^{1}}) \Big\{ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}X(s) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v)\lim_{s \rightarrow 0-}\nabla_{v}V(s) \Big\} \hat{r}_{2} \\
&= \underbrace{ \Big[ \nabla_{v}f_{0}(x^{1}, R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \Big( \nabla_{x}V(0; t, x, v+\varepsilon \hat{r}_{2}) - \lim_{s \rightarrow 0-}\nabla_{x}V(s) \Big) \Big]^{T} }_{(*)_{vx,2}\hat{r}_{2}} \\
&\quad + (-2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^{1}}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) ( R_{x^{1}} + 2tA_{v,x^{1}}) \big] \hat{r}_{2}.
\end{split}
\end{equation*}
Similar to \eqref{xv star1} and \eqref{xv star2},
\begin{equation} \label{vx star1}
\begin{split}
(*)_{vx,1} \hat{r}_{2}
&= \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}
= -t
\begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2},
\\
\end{split}
\end{equation}
\begin{equation} \label{vx star2}
\begin{split}
(*)_{vx,2} \hat{r}_{2}
&= \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(- 2 A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}(- 2 A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2} \\
&=
2t
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\hat{r}_{2}
-2
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}}
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}}
\end{bmatrix}
\hat{r}_{2}.
\\
\end{split}
\end{equation}
Hence,
\begin{equation} \label{nabla_vx f case2}
\begin{split}
&\nabla_{vx} f(t,x,v) \\
&= \underline{(*)_{vx,1}}_{\eqref{vx star1}} + \underline{(*)_{vx,2}}_{\eqref{vx star2}} \\
&\quad + R_{x^{1}} \big[ \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)(-tR_{x^1}) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \big] \\
&\quad + (-2A^{T}_{v,x^{1}}) \big[ \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^{1}}) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) ( R_{x^{1}} + 2tA_{v,x^{1}}) \big].
\end{split}
\end{equation}
Then from \eqref{nabla_vx f case1} and \eqref{nabla_vx f case2} we get the following compatibility condition
\begin{equation} \label{vx comp}
\begin{split}
&\nabla_{xx}f_{0}(x^{1}, v)(-t) + \nabla_{vx}f_{0}(x^{1}, v) \\
&= \underline{(*)_{vx,1}}_{\eqref{vx star1}} + \underline{(*)_{vx,2}}_{\eqref{vx star2}} \\
&\quad + R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v)(-tR_{x^1}) + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) (R_{x^{1}} + 2tA_{v,x^{1}}) \\
&\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) (-tR_{x^{1}}) + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) ( R_{x^{1}} + 2tA_{v,x^{1}}). \\
\end{split}
\end{equation}
\subsection{Compatibility conditions for transpose: $\nabla_{xv}^{T} = \nabla_{vx}$ and $\nabla_{xx}^{T} = \nabla_{xx}$}
First, we claim that \eqref{xv comp}, \eqref{vv comp}, \eqref{xx comp}, and \eqref{vx comp} imply the following four conditions for $(x^{1}, v)\in \gamma_{-}$
\begin{eqnarray}
\nabla_{xv}f_{0}(x^{1},v)
&=& R_{x^{1}}\nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}}\nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v, x^{1}}) \notag \\
&&\quad + \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1) \\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} ,\quad x^{1}=x^{1}(x,v), \label{Cond2 1} \\
\nabla_{xx}f_{0}(x^{1},v)
&=& R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) \notag \\
&&\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \notag \\
&&\quad + \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
- 2
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2)
\end{bmatrix},
\label{Cond2 2} \\
\nabla_{vv}f_{0}(x^{1},v)
&=& R_{x^{1}}\nabla_{vv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}}, \label{Cond2 3} \\
\nabla_{vx}f_{0}(x^{1},v)
&=& R_{x^{1}}\nabla_{vx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}})\nabla_{vv}f_{0}(x^{1}, R_{x^1}v)R_{x^{1}} \notag \\
&&\quad -2
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}}
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}}
\end{bmatrix}. \label{Cond2 4}
\end{eqnarray}
\eqref{xx comp} is identical to \eqref{Cond2 2}. Then applying \eqref{xx comp} to \eqref{vx comp} and \eqref{xv comp}, we obtain \eqref{Cond2 1} and \eqref{Cond2 4}, respectively. Finally, applying \eqref{Cond2 1}, \eqref{Cond2 2}, and \eqref{Cond2 4} to \eqref{vv comp}, we obtain \eqref{Cond2 3}, which is also obtained directly by applying $\nabla_{v}^{2}$ to \eqref{BC}. \\
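Indeed, since $R_{x^{1}}$ does not depend on $v$ and $R_{x^{1}}^{T}=R_{x^{1}}$, differentiating the specular boundary condition $f_{0}(x^{1},v)=f_{0}(x^{1},R_{x^{1}}v)$ twice in $v$ gives
\[
\nabla_{v}f_{0}(x^{1},v) = \nabla_{v}f_{0}(x^{1},R_{x^{1}}v)R_{x^{1}}, \qquad \nabla_{vv}f_{0}(x^{1},v) = R_{x^{1}}\nabla_{vv}f_{0}(x^{1},R_{x^{1}}v)R_{x^{1}},
\]
and the second identity is \eqref{Cond2 3}. \\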
Given \eqref{Cond2 1}--\eqref{Cond2 4}, we must still check the conditions that guarantee the necessary symmetries $\nabla_{xv}^{T} = \nabla_{vx}$ and $\nabla_{xx}^{T} = \nabla_{xx}$. \\
\subsubsection{$\nabla_{xv}^T=\nabla_{vx}$}
From \eqref{Cond2 1} and \eqref{Cond2 4}, we need
\begin{equation} \label{T invariant}
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}^{T}
=
-2
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}}
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}}
\end{bmatrix}.
\end{equation}
To check \eqref{T invariant}, we explicitly compute $\nabla_x(R_{x^1(x,v)}^1),\nabla_x(R_{x^1(x,v)}^2),\nabla_v(-2A^1_{v,x^1}),$ and $\nabla_v(-2A_{v,x^1}^2)$ in the following lemma.
\begin{lemma} \label{d_RA}
Recall the reflection operator $R_{x^1}$ in \eqref{BC} and $A_{v,x^1}$ in \eqref{def A},
\begin{equation*}
A_{v,x^1} := \left[ \left((v\cdot n(x^1))I +(n(x^1)\otimes v)\right) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right].
\end{equation*}
We write $A^i$ for the $i$th column of a matrix $A$ and let $\nabla_vA^i_{v,y}$ denote the $v$-derivative of $A_{v,y}^i$ for $1\leq i \leq 2$ and $(v,y) \in \mathbb{R}^2 \times \partial \Omega$. Then,
\begin{align*}
&\nabla_x (R_{x^1(x,v)}^1) = \begin{bmatrix}
\dfrac{-4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{4v_1n_1n_2}{v\cdot n(x^1)} \\
\dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}
\end{bmatrix}, \quad
\nabla_x (R_{x^1(x,v)}^2)= \begin{bmatrix}
\dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}\\
\dfrac{4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{-4v_1n_1n_2}{v\cdot n(x^1)}
\end{bmatrix},\\
&\nabla_v(-2A_{v,x^1}^1)= \begin{bmatrix}
-\dfrac{2v_2^2n_1}{(v\cdot n(x^1))^2} & -2n_2-\dfrac{2v_1^2n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2n_1^3}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2} \\
-\dfrac{2v_2^2n_2}{(v\cdot n(x^1))^2} & 2n_1 -\dfrac{2v_1^2 n_1n_2^2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2 n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2 n_1n_2^2}{(v\cdot n(x^1))^2}
\end{bmatrix},\\
&\nabla_v(-2A_{v,x^1}^2)= \begin{bmatrix}
2n_2+\dfrac{2v_1^2n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2n_1n_2^2}{(v\cdot n(x^1))^2} - \dfrac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2} & -\dfrac{2v_1^2n_1}{(v\cdot n(x^1))^2} \\
-2n_1 -\dfrac{2v_2^2 n_1n_2^2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2 n_2^3}{(v\cdot n(x^1))^2} + \dfrac{2v_1^2 n_1n_2^2}{(v\cdot n(x^1))^2} & -\dfrac{2v_1^2n_2}{(v\cdot n(x^1))^2}
\end{bmatrix},
\end{align*}
where $v_i$ is the $i$th component of $v$. We denote the $i$th component $n_i(x,v)$ of $n(x^1)$ by $n_i$; that is, $n_i$ depends on $(x,v)$. Moreover, the following identities hold:
\begin{equation}\label{prop d_R}
\nabla_{x}(R_{x^1(x,v)}^1)v =0, \quad \nabla_{x}(R_{x^1(x,v)}^2) v =0.
\end{equation}
\end{lemma}
\begin{proof}
Recall the definition of the reflection matrix $R_{x^1}$ and $-2A_{v,x^1}$:
\begin{align*}
R_{x^1}&=I-2n(x^1)\otimes n(x^1) = \begin{bmatrix}
1-2n_1^2 & -2n_1n_2 \\
-2n_1n_2 & 1-2n_2^2
\end{bmatrix},\\
-2A_{v,x^1}&= -2 \left[ \left((v\cdot n(x^1))I +(n(x^1)\otimes v)\right) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right]\\
&=\begin{bmatrix}
-2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} & 2v_1n_2 + \dfrac{2v_1^2n_1n_2}{v\cdot n(x^1)} -\dfrac{2v_1v_2n_1^2}{ v\cdot n(x^1)} \\
2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)} & -2v_1n_1 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} + \dfrac{2v_1^2n_2^2}{v \cdot n(x^1)}
\end{bmatrix}.
\end{align*}
To find $\nablala_x (R_{x^1(x,v)}^1),\nablala_x (R_{x^1(x,v)}^2)$, we use \eqref{normal} in Lemma \e^{\frac 12}f{d_n}:
\begin{equation}gin{equation} \label{comp_dn}
\nablala_x [n(x^1(x,v))] = I-\frac{v\otimes n(x^1)}{ v\cdotot n(x^1)}= \begin{equation}gin{bmatrix}
\mathrm{d}frac{v_2n_2}{v\cdotot n(x^1)} & -\mathrm{d}frac{v_1n_2}{v\cdotot n(x^1)} \\
-\mathrm{d}frac{v_2n_1}{v \cdotot n(x^1)} & \mathrm{d}frac{v_1n_1}{v\cdotot n(x^1)}
\end{bmatrix}.
\end{equation}
Firstly, we directly calculate $\nablala_x (R_{x^1(x,v)}^1)$ and $\nablala_x(R_{x^1(x,v)}^2)$ using \eqref{comp_dn}:
\begin{equation}gin{align*}
\nablala_x (R_{x^1(x,v)}^1) = \nablala_x \begin{equation}gin{bmatrix}
1-2n_1^2 \\ -2n_1n_2
\end{bmatrix}=
\begin{equation}gin{bmatrix}
\mathrm{d}frac{-4v_2n_1n_2}{v\cdotot n(x^1)} & \mathrm{d}frac{4v_1n_1n_2}{v\cdotot n(x^1)} \\
\mathrm{d}frac{-2v_2(n_2^2-n_1^2)}{v\cdotot n(x^1)} & \mathrm{d}frac{2v_1(n_2^2-n_1^2)}{v\cdotot n(x^1)}
\end{bmatrix},\\
\nablala_x (R_{x^1(x,v)}^2) = \nablala_x \begin{equation}gin{bmatrix}
-2n_1n_2 \\ 1-2n_2^2
\end{bmatrix}=
\begin{bmatrix}
\dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}\\
\dfrac{4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{-4v_1n_1n_2}{v\cdot n(x^1)}
\end{bmatrix}.
\end{align*}
Next, we calculate the $v$-derivative of $[-2A_{v,x^1}^1]$:
\begin{align*}
(\nabla_v (-2A_{v,x^1}^1))_{(1,1)} &= -\frac{2v_2n_1n_2 (v \cdot n(x^1))-2v_1v_2n_1^2n_2}{(v\cdot n(x^1))^2}-\frac{2v_2^2n_1^3}{(v\cdot n(x^1))^2}=-\frac{2v_2^2n_1}{(v\cdot n(x^1))^2}, \\
(\nabla_v (-2A_{v,x^1}^1))_{(1,2)} &=-2n_2-\frac{2v_1n_1n_2(v\cdot n(x^1))-2v_1v_2n_1n_2^2}{(v\cdot n(x^1))^2} +\frac{4v_2n_1^2 (v\cdot n(x^1)) -2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2}\\
&= -2n_2-\dfrac{2v_1^2n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2n_1^3}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2},\\
(\nabla_v (-2A_{v,x^1}^1))_{(2,1)}&=-\frac{2v_2n_2^2 (v\cdot n(x^1)) -2v_1v_2n_1n_2^2}{(v\cdot n(x^1))^2}-\frac{2v_2^2n_1^2n_2}{(v\cdot n(x^1))^2}=-\frac{2v_2^2n_2}{(v\cdot n(x^1))^2},\\
(\nabla_v (-2A_{v,x^1}^1))_{(2,2)}&=2n_1-\frac{2v_1n_2^2(v \cdot n(x^1))-2v_1v_2n_2^3}{(v\cdot n(x^1))^2}+\frac{4v_2n_1n_2 (v\cdot n(x^1)) -2v_2^2n_1n_2^2}{(v\cdot n(x^1))^2}\\
&= 2n_1 -\dfrac{2v_1^2 n_1n_2^2}{(v\cdot n(x^1))^2} + \dfrac{4v_1v_2 n_1^2n_2}{(v\cdot n(x^1))^2} + \dfrac{2v_2^2 n_1n_2^2}{(v\cdot n(x^1))^2}.
\end{align*}
Similarly, we deduce the $v$-derivative of $[-2A_{v,x^1}^2]$. Having derived $\nabla_x(R_{x^1(x,v)}^1)$ and $\nabla_x(R_{x^1(x,v)}^2)$, the identity \eqref{prop d_R} then follows from the direct calculation
\begin{align*}
\nabla_{x}(R_{x^1(x,v)}^1) v = \begin{bmatrix}
\dfrac{-4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{4v_1n_1n_2}{v\cdot n(x^1)} \\
\dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}
\end{bmatrix}\begin{bmatrix}
v_1 \\ v_2
\end{bmatrix} = 0, \\
\nabla_x(R_{x^1(x,v)}^2) v =\begin{bmatrix}
\dfrac{-2v_2(n_2^2-n_1^2)}{v\cdot n(x^1)} & \dfrac{2v_1(n_2^2-n_1^2)}{v\cdot n(x^1)}\\
\dfrac{4v_2n_1n_2}{v\cdot n(x^1)} & \dfrac{-4v_1n_1n_2}{v\cdot n(x^1)}
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}=0.
\end{align*}
\end{proof}
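We remark that the identities \eqref{prop d_R} can also be seen directly from \eqref{comp_dn}, without expanding the matrices: each entry of $R_{x^1(x,v)}^i$ is a function of $n(x^1(x,v))$ alone, so the chain rule gives (writing $D_n$ for the derivative with respect to the components of the unit normal)
\begin{align*}
\nabla_x (R_{x^1(x,v)}^i)\, v = D_n (R^i)\, \nabla_x[n(x^1(x,v))]\, v = D_n (R^i) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)v = D_n (R^i) \left(v - v\,\frac{n(x^1)\cdot v}{v\cdot n(x^1)}\right) = 0, \quad i=1,2.
\end{align*}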
Back to the point, we find the condition on $\nabla_v f_0(x^1,R_{x^1}v)$ satisfying \eqref{T invariant}. Since
\begin{align*}
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}^{T}=\begin{bmatrix}
\nabla_vf_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^1) & \nabla_vf_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^2)\\
\nabla_vf_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^1) & \nabla_vf_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^2)
\end{bmatrix}, \\
-2
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^{1}}
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^{1}}
\end{bmatrix}=\begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_1}(-2A^1_{v,x^1}) & \nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_2}(-2A^1_{v,x^1})\\
\nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_1}(-2A^2_{v,x^1}) & \nabla_v f_0(x^1,R_{x^1}v) \dfrac{\partial}{\partial v_2}(-2A^2_{v,x^1})
\end{bmatrix},
\end{align*}
it suffices to find the condition on $\nabla_v f_0(x^1,R_{x^1}v)$ such that
\begin{align*}
\nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^1)\right) =0, \quad
\nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^2)\right) =0,\\
\nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^1)\right) =0,\quad
\nabla_vf_0(x^1,R_{x^1}v) \left( \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^2)\right) =0.
\end{align*}
We denote the column vectors
\begin{align*}
K_1 := \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^1), \quad K_2:= \dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^1) -\dfrac{\partial}{\partial v_1} (-2A_{v,x^1}^2),\\
K_3 := \dfrac{\partial}{\partial x_1} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^1), \quad K_4:=\dfrac{\partial}{\partial x_2} (R_{x^1(x,v)}^2) -\dfrac{\partial}{\partial v_2} (-2A_{v,x^1}^2).
\end{align*}
A nonzero vector $\nabla_v f_0(x^1,R_{x^1}v)$ satisfying \eqref{T invariant}, i.e., orthogonal to all of $K_1,\dots,K_4$, exists if and only if the vectors $K_i$ are pairwise linearly dependent; that is, if and only if each of the determinants
\begin{align*}
\det \begin{bmatrix} \vert & \vert \\ K_i & K_j \\ \vert & \vert \end{bmatrix}, \quad 1\leq i < j \leq 4,
\end{align*}
vanishes. We now show that every determinant is $0$, so that $\nabla_v f_0(x^1,R_{x^1}v)$ must be parallel to a particular direction in order to satisfy \eqref{T invariant}. Using Lemma \ref{d_RA} and $\vert n(x^1) \vert^2 = n_1^2 +n_2^2=1$,\\
\textrm{(Case 1)} $(K_1 \leftrightarrow K_4) $
\begin{align*}
\det \begin{bmatrix} \vert & \vert \\ K_1 & K_4 \\ \vert & \vert \end{bmatrix}&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \det
\begin{bmatrix}
2v_2n_1n_2 -\dfrac{v_2^2n_1}{v\cdot n(x^1)} & v_1 (n_1^2-n_2^2)-\dfrac{v_1^2n_1}{v\cdot n(x^1)}\\
v_2(n_2^2-n_1^2)-\dfrac{v_2^2n_2}{v\cdot n(x^1)} & 2v_1n_1n_2 -\dfrac{v_1^2n_2}{v\cdot n(x^1)}
\end{bmatrix}\\
&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[\left( 4v_1v_2n_1^2n_2^2-\frac{2v_1^2v_2n_1n_2^2}{v\cdot n(x^1)} -\frac{2v_1v_2^2n_1^2n_2}{v\cdot n(x^1)} +\dfrac{v_1^2v_2^2n_1n_2}{(v\cdot n(x^1))^2}\right) \right. \\
&\quad \left. -\left(-v_1v_2(n_2^2-n_1^2)^2-\frac{v_1v_2^2n_2(n_1^2-n_2^2)}{v\cdot n(x^1)} -\frac{v_1^2v_2n_1(n_2^2-n_1^2)}{v\cdot n(x^1)}+\frac{v_1^2v_2^2n_1n_2}{(v\cdot n(x^1))^2}\right) \right]\\
\hide
&=\left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[\left( 4v_1v_2n_1^2n_2^2 +v_1v_2(n_2^2-n_1^2)^2\right)+\left(-\frac{2v_1^2v_2n_1n_2^2}{v\cdot n(x^1)}+\frac{v_1^2v_2n_1(n_2^2-n_1^2)}{v \cdot n(x^1)}\right) \right. \\
&\quad \left.+\left(\frac{v_1v_2^2n_2(n_1^2-n_2^2)}{v\cdot n(x^1)} -\frac{2v_1v_2^2n_1^2n_2}{v\cdot n(x^1)} \right) \right]\\
\unhide
&=\left(\frac{-2}{v\cdot n(x^1)}\right)^2\left( v_1v_2 -\frac{v_1^2v_2 n_1}{ v\cdot n(x^1)} -\frac{v_1v_2^2n_2}{v\cdot n(x^1)}\right)\\
&=0,
\end{align*}
\textrm{(Case 2)} $(K_1 \leftrightarrow K_2)$
\begin{align*}
\det \begin{bmatrix} \vert & \vert \\ K_1 & K_2 \\ \vert & \vert \end{bmatrix}&=\left(\frac{-2}{v\cdot n(x^1)}\right)^2 \det
\begin{bmatrix}
2v_2n_1n_2 -\dfrac{v_2^2n_1}{v\cdot n(x^1)} & -v_1n_1n_2+v_2n_2^2 -\dfrac{(v_2^2-v_1^2)n_1^2n_2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_1n_2^2}{v\cdot n(x^1)} \\
v_2(n_2^2-n_1^2)-\dfrac{v_2^2n_2}{v\cdot n(x^1)} & -v_1n_2^2 -v_2n_1n_2 -\dfrac{(v_2^2-v_1^2)n_1n_2^2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_2^3}{v \cdot n(x^1)}
\end{bmatrix}\\
&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(-2v_1v_2 n_1 n_2^3 -2v_2^2 n_1^2 n_2^2 -\dfrac{2(v_2^2-v_1^2)v_2n_1^2n_2^3}{v\cdot n(x^1)}+\dfrac{4v_1v_2^2n_1n_2^4}{ v \cdot n(x^1)} \right. \right. \\
&\quad+ \left. \left.\dfrac{v_1v_2^2n_1n_2^2}{v\cdot n(x^1)} +\dfrac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\dfrac{(v_2^2-v_1^2)v_2^2n_1^2n_2^2}{(v \cdot n(x^1))^2} -\dfrac{2v_1v_2^3n_1n_2^3}{(v\cdot n(x^1))^2} \right) \right. \\
&\quad \left. -\left(-v_1v_2n_1n_2(n_2^2-n_1^2)+v_2^2n_2^2(n_2^2-n_1^2)-\dfrac{(v_2^2-v_1^2)v_2n_1^2n_2(n_2^2-n_1^2)}{v\cdot n(x^1)}+\dfrac{2v_1v_2^2n_1n_2^2(n_2^2-n_1^2)}{v\cdot n(x^1)} \right. \right.\\
&\quad \left. \left. +\dfrac{v_1v_2^2n_1n_2^2}{v\cdot n(x^1)} -\dfrac{v_2^3n_2^3}{v\cdot n(x^1)} +\dfrac{(v_2^2-v_1^2)v_2^2n_1^2n_2^2}{(v\cdot n(x^1))^2}-\dfrac{2v_1v_2^3n_1n_2^3}{(v\cdot n(x^1))^2} \right) \right] \\
\hide
&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(-v_1v_2n_1n_2-v_2^2n_2^2\right)+\left(-\frac{(v_2^2-v_1^2)v_2n_1^2n_2}{v\cdot n(x^1)}+\frac{2v_1v_2^2n_1n_2^2}{v\cdot n(x^1)}+\frac{v_2^3n_2}{v \cdot n(x^1)} \right) \right]\\
\unhide
&=\left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[ -\frac{v_2^3n_2^3}{v\cdot n(x^1)} -\frac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\frac{v_2^3n_2}{v\cdot n(x^1)}\right]\\
&=0,
\end{align*}
\textrm{(Case 3)} $(K_1 \leftrightarrow K_3)$
\begin{align*}
\det \begin{bmatrix} \vert & \vert \\ K_1 & K_3 \\ \vert & \vert \end{bmatrix}&=\left(\frac{-2}{v\cdot n(x^1)}\right)^2 \det
\begin{bmatrix}
2v_2n_1n_2 -\dfrac{v_2^2n_1}{v\cdot n(x^1)} &-v_2n_1^2 -v_1n_1n_2-\dfrac{(v_1^2-v_2^2)n_1^2n_2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_1^3}{v\cdot n(x^1)}\\
v_2(n_2^2-n_1^2)-\dfrac{v_2^2n_2}{v\cdot n(x^1)} & v_1n_1^2 -v_2n_1n_2 -\dfrac{(v_1^2-v_2^2)n_1n_2^2}{v\cdot n(x^1)} +\dfrac{2v_1v_2n_1^2n_2}{v\cdot n(x^1)}
\end{bmatrix}\\
&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(2v_1v_2 n_1^3 n_2 -2v_2^2 n_1^2 n_2^2 -\dfrac{2(v_1^2-v_2^2)v_2n_1^2n_2^3}{v\cdot n(x^1)}+\dfrac{4v_1v_2^2n_1^3n_2^2}{ v \cdot n(x^1)} \right. \right. \\
&\quad- \left. \left.\dfrac{v_1v_2^2n_1^3}{v\cdot n(x^1)} +\dfrac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\dfrac{(v_1^2-v_2^2)v_2^2n_1^2n_2^2}{(v \cdot n(x^1))^2} -\dfrac{2v_1v_2^3n_1^3n_2}{(v\cdot n(x^1))^2} \right) \right. \\
&\quad \left. -\left(-v_2^2n_1^2(n_2^2-n_1^2)-v_1v_2n_1n_2(n_2^2-n_1^2)-\dfrac{(v_1^2-v_2^2)v_2n_1^2n_2(n_2^2-n_1^2)}{v\cdot n(x^1)}+\dfrac{2v_1v_2^2n_1^3(n_2^2-n_1^2)}{v\cdot n(x^1)} \right. \right.\\
&\quad \left. \left. +\dfrac{v_2^3n_1^2n_2}{v\cdot n(x^1)} +\dfrac{v_1v_2^2n_1n_2^2}{v\cdot n(x^1)} +\dfrac{(v_1^2-v_2^2)v_2^2n_1^2n_2^2}{(v\cdot n(x^1))^2}-\dfrac{2v_1v_2^3n_1^3n_2}{(v\cdot n(x^1))^2} \right) \right]\\
\hide
&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2 \left[ \left(v_1v_2n_1n_2-v_2^2n_1^2\right)+\left(-\frac{(v_1^2-v_2^2)v_2n_1^2n_2}{v\cdot n(x^1)} +\frac{2v_1v_2^2n_1^3}{v \cdot n(x^1)} -\frac{v_1v_2^2n_1}{v\cdot n(x^1)}\right)\right]\\
\unhide
&= \left(\dfrac{-2}{v\cdot n(x^1)}\right)^2\left[ \frac{v_1v_2^2n_1^3}{v\cdot n(x^1)} +\frac{v_1v_2^2n_1n_2^2}{v \cdot n(x^1)} -\frac{v_1v_2^2n_1}{v\cdot n(x^1)}\right]\\
&=0.
\end{align*}
Moreover, from (Case 1) and (Case 2), we deduce
\begin{align*}
\det \begin{bmatrix} \vert & \vert \\ K_2 & K_4 \\ \vert & \vert \end{bmatrix}=0.
\end{align*}
Likewise, it holds that
\begin{align*}
\det \begin{bmatrix} \vert & \vert \\ K_2 & K_3 \\ \vert & \vert \end{bmatrix}=0, \quad \det \begin{bmatrix} \vert & \vert \\ K_3 & K_4 \\ \vert & \vert \end{bmatrix}=0.
\end{align*}
Therefore, since all the determinants vanish, the vectors $K_1,\dots,K_4$ are pairwise parallel, and we can find a nonzero vector $\nabla_v f_0(x^1,R_{x^1}v)$ satisfying \eqref{T invariant}. Since
\begin{align*}
\nabla_v f_0(x^1,R_{x^1}v) \begin{bmatrix} \vert \\ K_1 \\ \vert \end{bmatrix} = 0,
\end{align*}
$\nabla _v f_0 (x^1,R_{x^1}v)$ is orthogonal to the column vector $K_1$. More specifically, $\nabla_v f_0(x^1,R_{x^1}v)^T$ has the following direction
\begin{align*}
\frac{-2}{v\cdot n(x^1)} \begin{bmatrix}
-v_2(n_2^2-n_1^2) + \dfrac{v_2^2 n_2}{v\cdot n(x^1)} \\
2v_2n_1n_2-\dfrac{v_2^2n_1}{v\cdot n(x^1)}
\end{bmatrix}&=\frac{-2}{(v\cdot n(x^1))^2}
\begin{bmatrix}
-v_1v_2n_1(n_2^2-n_1^2) +2v_2^2n_1^2n_2\\
2v_1v_2n_1^2n_2 +v_2^2n_1(n_2^2-n_1^2)
\end{bmatrix}\\
&=\frac{2v_2n_1}{(v\cdot n(x^1))^2} \begin{bmatrix}
n_2^2-n_1^2 & -2n_1n_2\\
-2n_1n_2 & n_1^2 -n_2^2
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}= \frac{2v_2n_1}{(v\cdot n(x^1))^2} R_{x^1}v.
\end{align*}
Consequently, for \eqref{T invariant}, we get the following condition
\begin{align} \label{Cond3}
\nabla _v f_0(x,R_xv) \parallel (R_xv)^T,
\end{align}
for any $x \in \partial \Omega$. \\
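For instance, any initial datum that is radial in velocity, $f_0(x,v)=g(|v|^2)$ with $g\in C^2$, satisfies \eqref{Cond3}: since $|R_xv|=|v|$,
\begin{align*}
\nabla_v f_0(x,R_xv) = 2g'(|R_xv|^2)\,(R_xv)^T = 2g'(|v|^2)\,(R_xv)^T \parallel (R_xv)^T .
\end{align*}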
\subsubsection{$\nabla_{xx}^T =\nabla_{xx}$} From \eqref{Cond2 2}, we need
\begin{equation*}
\begin{split}
&\left(\begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} +
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix}\right)^T
\\
&= \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} +
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix}.
\end{split}
\end{equation*}
Thus, it suffices to check that
\begin{align*}
&\nabla_x f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1) +\nabla_v f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_2} (-2A_{v,x^1(x,v)}^1)\\
&= \nabla_x f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2) +\nabla_v f_0(x^1,R_{x^1}v) \frac{\partial}{\partial x_1} (-2A_{v,x^1(x,v)}^2).
\end{align*}
In other words, we have to find the condition on $\nabla_x f_0 (x^1,R_{x^1}v)$ such that
\begin{align}\label{xx_sym2}
\nabla_xf_0(x^1,R_{x^1}v) \left[\frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1)-\frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2) \right] = \nabla_v f_0(x^1,R_{x^1}v) \left[ \frac{\partial}{\partial x_1} (-2A_{v,x^1(x,v)}^2)-\frac{\partial}{\partial x_2} (-2A_{v,x^1(x,v)}^1)\right].
\end{align}
Since we computed $\nabla_x(R_{x^1(x,v)}^1)$ and $\nabla_x(R_{x^1(x,v)}^2)$ in Lemma \ref{d_RA}, it remains to express $\nabla_x (-2A_{v,x^1(x,v)}^1)$ and $\nabla_x (-2A_{v,x^1(x,v)}^2)$ by components, which we do in the following lemma.
\begin{lemma} \label{dx_A} Recall the matrix $A_{v,x}$ defined in \eqref{def A},
\begin{equation*}
A_{v,x^1} = \left[ \left((v\cdot n(x^1))I +(n(x^1)\otimes v)\right) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)\right].
\end{equation*}
If we write $A^i$ for the $i$th column of the matrix $A$, then
\begin{align*}
&\nabla_x(-2A_{v,x^1(x,v)}^1) \\
&= \begin{bmatrix}
\dfrac{4v_1^2v_2^2n_1^3 +2v_1v_2^3(3n_1^2n_2-n_2^3)+ 2v_2^4(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1^3v_2n_1^3-2v_1^2v_2^2(3n_1^2n_2-n_2^3)-2v_1v_2^3(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3}\\
\dfrac{4v_2^4n_2^3+2v_1v_2^3(3n_1n_2^2-n_1^3)+2v_1^2v_2^2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1v_2^3n_2^3-2v_1^2v_2^2(3n_1n_2^2-n_1^3)-2v_1^3v_2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3}
\end{bmatrix},\\
&\nabla_x(-2A_{v,x^1(x,v)}^2)\\
&= \begin{bmatrix}
\dfrac{-4v_1^3v_2n_1^3-2v_1v_2^3(3n_1n_2^2+n_1^3) -2v_1^2v_2^2(3n_1^2n_2-n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^4n_1^3 +2v_1^2v_2^2(3n_1n_2^2+n_1^3)+2v_1^3v_2 (3n_1^2n_2-n_2^3)}{(v \cdot n(x^1))^3}\\
\dfrac{-4v_1v_2^3n_2^3 -2v_1^3v_2(3n_1^2n_2+n_2^3) -2v_1^2v_2^2(3n_1n_2^2-n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^2 v_2^2 n_2^3 +2v_1^4(3n_1^2n_2+n_2^3)+2v_1^3v_2(3n_1n_2^2-n_1^3)}{(v \cdot n(x^1))^3}
\end{bmatrix},
\end{align*}
where $v_i$ is the $i$th component of $v$. We denote the $i$th component $n_i(x,v)$ of $n(x^1)$ by $n_i$; that is, $n_i$ depends on $x,v$. Furthermore, it holds that
\begin{equation} \label{prop d_A}
\nabla_x(-2A_{v,x^1(x,v)}^1)v =0, \quad \nabla_x (-2A_{v,x^1(x,v)}^2)v=0.
\end{equation}
\end{lemma}
\begin{proof}
We write the matrix $-2A_{v,x^1}$ by components:
\begin{align*}
-2A_{v,x^1}=\begin{bmatrix}
-2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} & 2v_1n_2 + \dfrac{2v_1^2n_1n_2}{v\cdot n(x^1)} -\dfrac{2v_1v_2n_1^2}{ v\cdot n(x^1)} \\
2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)} & -2v_1n_1 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} + \dfrac{2v_1^2n_2^2}{v \cdot n(x^1)}
\end{bmatrix}.
\end{align*}
For $\nabla_x(-2A_{v,x^1(x,v)}^1)$, we first take the derivative of the $(1,1)$ component of $-2A_{v,x^1}$ with respect to $x_1$:
\begin{align*}
&\frac{\partial}{\partial x_1} \left(-2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} \right)\\
&=-2v_2 \frac{\partial n_2}{\partial x_1} + \dfrac{\left (-2v_1v_2 n_2\frac{\partial n_1}{\partial x_1} - 2 v_1v_2n_1 \frac{\partial n_2}{\partial x_1}\right)(v\cdot n(x^1))+2v_1v_2n_1n_2\left(v_1\frac{\partial n_1}{\partial x_1}+v_2\frac{\partial n_2}{\partial x_1}\right)}{(v\cdot n(x^1))^2}\\
& \quad+ \dfrac{\left(4v_2^2n_1\frac{\partial n_1}{\partial x_1}\right)(v\cdot n(x^1))-2v_2^2n_1^2\left(v_1\frac{\partial n_1}{\partial x_1}+v_2\frac{\partial n_2}{\partial x_1}\right)}{(v \cdot n(x^1))^2}\\
\hide
&=\frac{4v_1v_2^2n_1^2-2v_1v_2^2n_2^2+6v_2^3n_1n_2}{(v\cdot n(x^1))^2} +\frac{2v_1^2v_2^2n_1n_2^2-4v_1v_2^3n_1^2n_2+2v_2^4n_1^3}{(v\cdot n(x^1))^3}\\
\unhide
&=\dfrac{4v_1^2v_2^2n_1^3 +2v_1v_2^3(3n_1^2n_2-n_2^3)+ 2v_2^4(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3},
\end{align*}
where we used \eqref{normal} in Lemma \ref{d_n}. Similarly, we get
\begin{align*}
&\frac{\partial}{\partial x_2} \left(-2v_2n_2 -\dfrac{2v_1v_2n_1n_2}{v\cdot n(x^1)} +\dfrac{2v_2^2 n_1^2}{ v\cdot n(x^1)} \right)\\
&=-2v_2 \frac{\partial n_2}{\partial x_2} + \dfrac{\left (-2v_1v_2 n_2\frac{\partial n_1}{\partial x_2} - 2 v_1v_2n_1 \frac{\partial n_2}{\partial x_2}\right)(v\cdot n(x^1))+2v_1v_2n_1n_2\left(v_1\frac{\partial n_1}{\partial x_2}+v_2\frac{\partial n_2}{\partial x_2}\right)}{(v\cdot n(x^1))^2}\\
& \quad+ \frac{\left(4v_2^2n_1\frac{\partial n_1}{\partial x_2}\right)(v\cdot n(x^1))-2v_2^2n_1^2\left(v_1\frac{\partial n_1}{\partial x_2}+v_2\frac{\partial n_2}{\partial x_2}\right)}{(v \cdot n(x^1))^2}\\
\hide
&=\frac{-4v_1^2v_2n_1^2-6v_1v_2^2n_1n_2+2v_1^2v_2n_2^2}{(v \cdot n(x^1))^2} + \frac{4v_1^2v_2^2n_1^2n_2-2v_1^3v_2n_1n_2^2-2v_1v_2^3n_1^3}{(v\cdot n(x^1))^3} \\
\unhide
&= \dfrac{-4v_1^3v_2n_1^3-2v_1^2v_2^2(3n_1^2n_2-n_2^3)-2v_1v_2^3(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3},\\
&\frac{\partial}{\partial x_1} \left(2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)}\right)\\
&=2v_2\frac{\partial n_1}{\partial x_1} - \frac{\left(4v_1v_2n_2\frac{\partial n_2}{\partial x_1}\right)(v\cdot n(x^1))-2v_1v_2n_2^2\left(v_1\frac{\partial n_1}{\partial x_1}+v_2\frac{\partial n_2}{\partial x_1}\right) }{(v \cdot n(x^1))^2}\\
&\quad+ \frac{\left( 2v_2^2n_2\frac{\partial n_1}{\partial x_1} +2v_2^2n_1\frac{\partial n_2}{\partial x_1}\right) (v\cdot n(x^1))-2v_2^2n_1n_2\left( v_1 \frac{\partial n_1}{\partial x_1}+v_2 \frac{\partial n_2}{\partial x_1}\right)}{(v\cdot n(x^1))^2}\\
\hide
&=\frac{6v_1v_2^2n_1n_2+4v_2^3n_2^2-2v_2^3n_1^2}{(v\cdot n(x^1))^2} +\frac{2v_1^2v_2^2n_2^3-4v_1v_2^3n_1n_2^2+2v_2^4n_1^2n_2}{(v\cdot n(x^1))^3}\\
\unhide
&=\dfrac{4v_2^4n_2^3+2v_1v_2^3(3n_1n_2^2-n_1^3)+2v_1^2v_2^2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3},\\
&\frac{\partial}{\partial x_2} \left(2v_2n_1 -\dfrac{2v_1v_2n_2^2}{v\cdot n(x^1)} +\dfrac{2v_2^2n_1n_2}{v\cdot n(x^1)}\right)\\
&=2v_2\frac{\partial n_1}{\partial x_2} - \frac{\left(4v_1v_2n_2\frac{\partial n_2}{\partial x_2}\right)(v\cdot n(x^1))-2v_1v_2n_2^2\left(v_1\frac{\partial n_1}{\partial x_2}+v_2\frac{\partial n_2}{\partial x_2}\right) }{(v \cdot n(x^1))^2}\\
&\quad+ \frac{\left( 2v_2^2n_2\frac{\partial n_1}{\partial x_2} +2v_2^2n_1\frac{\partial n_2}{\partial x_2}\right) (v\cdot n(x^1))-2v_2^2n_1n_2\left( v_1 \frac{\partial n_1}{\partial x_2}+v_2 \frac{\partial n_2}{\partial x_2}\right)}{(v\cdot n(x^1))^2}\\
\hide
&\quad -\frac{2v_1v_2^2n_2^2}{(v\cdot n(x^1))^2} +\frac{2v_1v_2^2n_1^2}{(v\cdot n(x^1))^2} +\frac{2v_1^2v_2^2n_1n_2^2}{(v\cdot n(x^1))^3} -\frac{2v_1v_2^3n_1^2n_2}{(v\cdot n(x^1))^3}\\
\unhide
&=\dfrac{-4v_1v_2^3n_2^3-2v_1^2v_2^2(3n_1n_2^2-n_1^3)-2v_1^3v_2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3}.
\end{align*}
Thus, we have derived $\nabla_x(-2A_{v,x^1(x,v)}^1)$. The matrix $\nabla_x(-2A_{v,x^1(x,v)}^2)$ is obtained in the same way, and we omit the details. With the formulas for $\nabla_x(-2A_{v,x^1(x,v)}^1)$ and $\nabla_x(-2A_{v,x^1(x,v)}^2)$ above, a direct calculation gives \eqref{prop d_A}:
\begin{footnotesize}
\begin{align*}
&\nabla_x(-2A_{v,x^1(x,v)}^1) v \\
&= \begin{bmatrix}
\dfrac{4v_1^2v_2^2n_1^3 +2v_1v_2^3(3n_1^2n_2-n_2^3)+ 2v_2^4(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1^3v_2n_1^3-2v_1^2v_2^2(3n_1^2n_2-n_2^3)-2v_1v_2^3(3n_1n_2^2+n_1^3)}{(v\cdot n(x^1))^3}\\
\dfrac{4v_2^4n_2^3+2v_1v_2^3(3n_1n_2^2-n_1^3)+2v_1^2v_2^2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{-4v_1v_2^3n_2^3-2v_1^2v_2^2(3n_1n_2^2-n_1^3)-2v_1^3v_2(3n_1^2n_2+n_2^3)}{(v\cdot n(x^1))^3}
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}=0,\\
&\nabla_x(-2A_{v,x^1(x,v)}^2) v \\
&=\begin{bmatrix}
\dfrac{-4v_1^3v_2n_1^3-2v_1v_2^3(3n_1n_2^2+n_1^3) -2v_1^2v_2^2(3n_1^2n_2-n_2^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^4n_1^3 +2v_1^2v_2^2(3n_1n_2^2+n_1^3)+2v_1^3v_2 (3n_1^2n_2-n_2^3)}{(v \cdot n(x^1))^3}\\
\dfrac{-4v_1v_2^3n_2^3 -2v_1^3v_2(3n_1^2n_2+n_2^3) -2v_1^2v_2^2(3n_1n_2^2-n_1^3)}{(v\cdot n(x^1))^3} & \dfrac{4v_1^2 v_2^2 n_2^3 +2v_1^4(3n_1^2n_2+n_2^3)+2v_1^3v_2(3n_1n_2^2-n_1^3)}{(v \cdot n(x^1))^3}
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}=0.
\end{align*}
\end{footnotesize}
\end{proof}
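As with \eqref{prop d_R}, the identities \eqref{prop d_A} can also be seen without expanding the matrices: $A_{v,x^1(x,v)}$ depends on $x$ only through the unit normal $n(x^1(x,v))$, so the chain rule together with \eqref{comp_dn} gives
\begin{align*}
\nabla_x(-2A_{v,x^1(x,v)}^i)\, v = D_n(-2A^i_{v,\cdot})\, \nabla_x[n(x^1(x,v))]\, v = D_n(-2A^i_{v,\cdot}) \left(I-\frac{v\otimes n(x^1)}{v\cdot n(x^1)}\right)v = 0, \quad i=1,2,
\end{align*}
where, as before, $D_n$ denotes differentiation with respect to the components of the unit normal.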
Now, back to our consideration \eqref{xx_sym2}. By Lemma \ref{dx_A}, we have
\begin{align*}
\frac{\partial}{\partial x_2} (-2A_{v,x^1(x,v)}^1)= \frac{\partial}{\partial x_1}(-2A_{v,x^1(x,v)}^2),
\end{align*}
so the right-hand side of \eqref{xx_sym2} vanishes, and \eqref{xx_sym2} reduces to the requirement
\begin{align*}
\nabla_xf_0(x^1,R_{x^1}v) \left[\frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1)-\frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2) \right]=\frac{2}{v\cdot n(x^1)}\nabla_xf_0(x^1,R_{x^1}v) \begin{bmatrix}
2v_1n_1n_2 +v_2(n_2^2-n_1^2) \\
v_1(n_2^2-n_1^2)-2v_2n_1n_2
\end{bmatrix}=0.
\end{align*}
It means that $\nabla_x f_0(x^1,R_{x^1}v)$ is orthogonal to $\frac{\partial}{\partial x_2}(R_{x^1(x,v)}^1)-\frac{\partial}{\partial x_1}(R_{x^1(x,v)}^2)$ and $\nabla_xf_0(x^1,R_{x^1}v)^T$ has the following direction
\begin{align*}
\begin{bmatrix}
-v_1(n_2^2-n_1^2)+2v_2n_1n_2 \\ 2v_1n_1n_2+v_2(n_2^2-n_1^2)
\end{bmatrix}=-\begin{bmatrix}
n_2^2-n_1^2 & -2n_1n_2 \\ -2n_1n_2 & n_1^2-n_2^2
\end{bmatrix}\begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}=-R_{x^1}v.
\end{align*}
For $\nabla_{xx} f_0(x^1,R_{x^1}v)^T = \nabla_{xx} f_0 (x^1,R_{x^1}v)$ to hold, the following condition
\begin{align} \label{Cond4}
\nabla_xf_0(x,R_xv) \parallel (R_xv)^T,
\end{align}
must be satisfied for $x \in \partial \Omega$. \\
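In particular, the velocity-radial data $f_0(x,v)=g(|v|^2)$ considered after \eqref{Cond3} also satisfy \eqref{Cond4} trivially, since
\begin{align*}
\nabla_x f_0(x,R_xv) = 0 \parallel (R_xv)^T,
\end{align*}
so \eqref{Cond3} and \eqref{Cond4} admit nontrivial common examples.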
\subsection{Conditions including $\partial_{t}$}
In this subsection, we find conditions for $\partial_{tt}, \partial_{t}\nabla_{x}, \partial_{t}\nabla_{v}, \nabla_{x}\partial_{t}, \nabla_{v}\partial_{t}$. In the last subsubsection, we show that all of these compatibility conditions involving $\partial_{t}$ are covered by \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. \\
\subsubsection{$\partial_{tt}$} Using the same perturbation \eqref{Perb_t} as in the $C^1_t$ compatibility condition, we derive the $C^2_t$ compatibility condition. For $\varepsilon>0$,
\begin{align*}
\partial_t(f(t+\varepsilon,x,v)-f(t,x,v))&= \partial_t (f_0(X^\varepsilon(0),R_{x^1}v)-f_0(X(0),R_{x^1}v))\\
&=\left( \nabla_x f_0(X^\varepsilon(0),R_{x^1}v)-\nabla_xf_0(X(0),R_{x^1}v)\right) (-R_{x^1}v) \\
&=(-R_{x^1}v)^T \left (\nabla_x f_0(X^\varepsilon(0),R_{x^1}v) -\nabla_xf_0(X(0),R_{x^1}v) \right)^T,
\end{align*}
which implies
\begin{align*}
f_{tt}(t,x,v) &= \lim_{\varepsilon \rightarrow 0+}\frac{ \partial_t f(t+\varepsilon,x,v)-\partial_t f(t,x,v)}{\varepsilon}\\
&=(-R_{x^1}v)^T \nabla_{xx} f_0(x^1,R_{x^1}v) \lim_{\varepsilon\rightarrow 0+}\frac{ X^\varepsilon(0)-X(0)}{\varepsilon}\\
&=(-R_{x^1}v)^T \nabla_{xx} f_0(x^1,R_{x^1}v) (-R_{x^1}v).
\end{align*}
On the other hand, for $\varepsilon<0$, it holds that
\begin{align*}
\partial_t(f(t+\varepsilon,x,v)-f(t,x,v))= \partial_t (f_0(X^\varepsilon(0),v)-f_0(X(0),v))&=\left( \nabla_x f_0(X^\varepsilon(0),v)-\nabla_xf_0(X(0),v)\right) (-v)\\
&=(-v)^T \left( \nabla_x f_0(X^\varepsilon(0),v)-\nabla_xf_0(X(0),v)\right)^T.
\end{align*}
Thus, we have
\begin{align*}
f_{tt}(t,x,v) &= \lim_{\varepsilon \rightarrow 0-}\frac{ \partial_t f(t+\varepsilon,x,v)-\partial_t f(t,x,v)}{\varepsilon}\\
&=(-v)^T \nabla_{xx} f_0(x^1,v) \lim_{\varepsilon\rightarrow 0-}\frac{ X^\varepsilon(0)-X(0)}{\varepsilon}\\
&=(-v)^T \nabla_{xx} f_0(x^1,v) (-v).
\end{align*}
To sum up, the condition
\begin{align} \label{time cond}
v^T \nabla_{xx}f_0(x^1,v)v = (R_{x^1}v)^T \nabla_{xx}f_0(x^1,R_{x^1}v)(R_{x^1}v),
\end{align}
must be satisfied for $f \in C^2_t$. \\
\subsubsection{$C^2_{t,x}$} We first use the perturbation \eqref{Perb_t} for $\varepsilon <0$. From \eqref{c_3}, it holds that
\begin{equation} \label{nabla_tx f case1}
\begin{split}
\partial_t [\nabla_xf(t,x,v)]&= \lim_{\varepsilon \rightarrow 0-} \frac{ \nabla_x f(t+\varepsilon,x,v) - \nabla_xf(t,x,v)}{\varepsilon}\\
&=\lim_{\varepsilon \rightarrow 0-} \frac{1}{\varepsilon} \left( \nabla_x \left[ f_0(X(0;t+\varepsilon,x,v),V(0;t+\varepsilon,x,v))\right]-\nabla_xf_0(X(0),v)\right)\\
&=\lim_{\varepsilon \rightarrow 0-} \frac{1}{\varepsilon} \left( \nabla_x f_0(X^\varepsilon(0),v)-\nabla_x f_0(X(0),v)\right)\\
&=-v^T \nabla_{xx} f_0(x^1,v),
\end{split}
\end{equation}
where we used $\nabla_x X^{\varepsilon}(0) = I_2$ and $\nabla_x V^{\varepsilon}(0)=0$. On the other hand, for $\varepsilon>0$,
\begin{align*}
X^{\varepsilon}(0):= X(0;t+\varepsilon,x,v)=X(0;t,x-\varepsilon v, v), \quad V^{\varepsilon}(0):=V(0;t+\varepsilon,x,v)=R_{x^1}v.
\end{align*}
Similar to the previous case $\nabla_{xx}$, using \eqref{nabla XV_x-} and \eqref{c_4},
\begin{align*}
\partial_t [\nabla_xf(t,x,v)]&= \lim_{\varepsilon \rightarrow 0+} \frac{ \nabla_x f(t+\varepsilon,x,v) - \nabla_xf(t,x,v)}{\varepsilon}\\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x \left[ f_0(X(0;t+\varepsilon,x,v),V(0;t+\varepsilon,x,v))\right] \right. \\
&\left.\quad - \left(\nabla_x f_0(X(0),R_{x^1}v)R_{x^1} -2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1} \right) \right)\\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x X^{\varepsilon}(0) +\nabla_v f_0(X^{\varepsilon}(0),V^{\varepsilon}(0)) \nabla_x V^{\varepsilon}(0)\right. \\
&\quad \left. -\left( \nabla_x f_0(X(0),R_{x^1}v)R_{x^1} -2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1}\right) \right)\\
&=\lim _{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x f_0 (X^{\varepsilon}(0),R_{x^1}v) \nabla_x X^{\varepsilon}(0) -\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right) \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_v f_0(X^{\varepsilon}(0),R_{x^1}v) \nabla_x V^{\varepsilon}(0) +2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1}\right) \\
&:= I_{tx,1}+I_{tx,2},
\end{align*}
where
\begin{align*}
I_{tx,1}&:=\lim_{\varepsilon \rightarrow 0+}\frac{1}{\varepsilon} \left(\nabla_x f_0 (X^{\varepsilon}(0),R_{x^1}v) \nabla_x X^{\varepsilon}(0) - \nabla_xf_0(X^\varepsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_x X(s) \right. \\
&\quad \left. +\nabla_x f_0(X^\varepsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_x X(s) -\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right)\\
&\stackrel{r\leftrightarrow c}{=} \left[\nabla_xf_0(x^1,R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x X^{\varepsilon}(0)-\lim_{s \rightarrow 0-} \nabla_x X(s) \right)\right]^T\\
&\quad + R_{x^1}\nabla_{xx}f_0(x^1,R_{x^1}v)(-R_{x^1}v)\\
&=\begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v)\nabla_x(R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)
\end{bmatrix} (-v)+R_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v) (-R_{x^1}v), \\
I_{tx,2}&:= \lim_{\varepsilon \rightarrow 0+}\frac{1}{\varepsilon} \left(\nabla_v f_0 (X^{\varepsilon}(0),R_{x^1}v) \nabla_x V^{\varepsilon}(0) - \nabla_vf_0(X^\varepsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-}\nabla_x V(s) \right. \\
&\quad \left. +\nabla_v f_0(X^\varepsilon(0),R_{x^1}v) \lim_{s\rightarrow 0-} \nabla_x V(s) -2\nabla_v f_0(X(0),R_{x^1}v)A_{v,x^1}\right)\\
&\stackrel{r\leftrightarrow c}{=} \left[\nabla_vf_0(x^1,R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x V^{\varepsilon}(0)-\lim_{s \rightarrow 0-} \nabla_x V(s) \right)\right]^T\\
&\quad +(-2A^T_{v,x^1})\nabla_{xv} f_0(x^1,R_{x^1}v)\lim_{\varepsilon \rightarrow 0+} \frac{ X^{\varepsilon}(0)-X(0)}{\varepsilon}\\
&= \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2)
\end{bmatrix} (-v)+ (-2A^T_{v,x^1}) \nabla_{xv}f_0(x^1,R_{x^1}v) (-R_{x^1}v).
\end{align*}
Thus,
\begin{equation}\label{nabla_tx f case2}
\begin{split}
\partial_t [\nabla_x f(t,x,v)] &= (-v)^T \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)
\end{bmatrix}^T+(-v)^T \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2)
\end{bmatrix}^T\\
& \quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}).
\end{split}
\end{equation}
From \eqref{nabla_tx f case1} and \eqref{nabla_tx f case2}, we have the following condition
\begin{equation} \label{tx comp}
\begin{split}
(-v^T) \nabla_{xx}f_0(x^1,v) &= (-v)^T \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)
\end{bmatrix}^T+(-v)^T \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2)
\end{bmatrix}^T\\
& \quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v) (-2A_{v,x^1}).
\end{split}
\end{equation}
\subsubsection{$C^2_{t,v}$} Similar to $C^2_{t,x}$, we use \eqref{c_1} and the perturbation \eqref{Perb_t} for $\varepsilon<0$ to obtain
\begin{equation}\label{nabla_tv f case1}
\begin{split}
\partial_{t}[\nabla_{v}f(t,x,v)] &= \lim_{\varepsilon \rightarrow 0-} \frac{ \nabla_v f(t+\varepsilon,x,v) -\nabla_v f(t,x,v)}{ \varepsilon}\\
&= \lim_{\varepsilon \rightarrow 0-} \frac{1}{\varepsilon} \left( \nabla_v \left[ f_0(X(0;t+\varepsilon,x,v),V(0;t+\varepsilon,x,v))\right]-(-t\nabla_x f_0(X(0),v)+\nabla_vf_0(X(0),v)) \right)\\
&= \lim_{\varepsilon \rightarrow 0-} \frac{1}{\varepsilon} \left( -(t+\varepsilon) \nabla_x f_0(X^{\varepsilon}(0),v) +\nabla_v f_0(X^{\varepsilon}(0),v) +t\nabla_x f_0(X(0),v) -\nabla_v f_0(X(0),v) \right)\\
&=-\nabla_x f_0(x^1,v) -t(-v^T) \nabla_{xx}f_0(x^1,v) + (-v^T) \nabla_{vx}f_0(x^1,v),
\end{split}
\end{equation}
where we have used $\nabla_v X^{\varepsilon}(0) = -(t+\varepsilon) I_2, \nabla_v V^{\varepsilon}(0) = I_2$.
For $\varepsilon>0$, the perturbation \eqref{Perb_t} becomes
\begin{equation*}
X^{\varepsilon}(0):=X(0;t+\varepsilon,x,v) =X(0;t,x-\varepsilon v,v) =x^1 -(t^1+\varepsilon)R_{x^1}v, \quad V^{\varepsilon}(0):=V(0;t+\varepsilon,x,v)=R_{x^1}v.
\end{equation*}
By the product rule, Lemma \ref{nabla xv b} and Lemma \ref{d_n}, one obtains that
\begin{align*}
\nabla_v [X^{\varepsilon}(0)]&=\nabla_v [x^1-(t^1+\varepsilon)R_{x^1}v] =-t\left(I-\frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -R_{x^1}v \otimes \nabla_v t^1 -\varepsilon \nabla_v (R_{x^1}v)\\
&=-t \left(I-\frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right)-t R_{x^1}v \otimes \frac{n(x^1)}{v\cdot n(x^1)} -\varepsilon (R_{x^1} + 2t A_{v,x^1})\\
&= -tR_{x^1} -\varepsilon (R_{x^1}+2tA_{v,x^1}), \\
\nabla_v[V^{\varepsilon}(0)]&= \nabla_v [R_{x^1}v] = R_{x^1}+2tA_{v,x^1}.
\end{align*}
Through the $v$-derivative of $X^{\varepsilon}(0),V^{\varepsilon}(0)$ above and \eqref{c_2},
\begin{equation} \label{nabla_tv f case2}
\begin{split}
\partial_t[\nabla_{v} f(t,x,v)]&= \lim_{\varepsilon \rightarrow 0+} \frac{\nabla_v f(t+\varepsilon,x,v) -\nabla_v f(t,x,v)}{\varepsilon}\\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} (\nabla_v [f_0(X(0;t+\varepsilon,x,v),V(0;t+\varepsilon,x,v)]\\
&\quad -(-t\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}+ \nabla_v f_0(X(0),R_{x^1}v)(R_{x^1}+2tA_{v,x^1})))\\
&=-\nabla_x f_0(x^1,R_{x^1}v)\left(R_{x^1}+2tA_{v,x^1}\right) -t \left[ \lim_{\varepsilon\rightarrow 0+} \frac{1}{\varepsilon} \left(\nabla_xf_0(X^{\varepsilon}(0),R_{x^1}v) -\nabla_xf_0(X(0),R_{x^1}v)\right)\right] R_{x^1} \\
&\quad + \left [\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon}\left( \nabla_v f_0(X^{\varepsilon}(0),R_{x^1}v) -\nabla_v f_0(X(0),R_{x^1}v) \right)\right]\left(R_{x^1}+2tA_{v,x^1}\right)\\
&\stackrel{r\leftrightarrow c}{=} -\left(R_{x^1}+2tA_{v,x^1}\right)^T\nabla_x f_0(x^1,R_{x^1}v)^T -tR_{x^1} \nabla_{xx}f_0(x^1,R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{X^{\varepsilon}(0)-X(0)}{\varepsilon} \\
&\quad + \left(R_{x^1}+2tA_{v,x^1}\right)^T \nabla_{xv} f_0(x^1,R_{x^1}v) \lim_{\varepsilon \rightarrow 0+} \frac{X^{\varepsilon}(0)-X(0)}{\varepsilon}\\
&\stackrel{c\leftrightarrow r}{=} -\nabla_xf_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}) -t(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\
&\quad +(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}).
\end{split}
\end{equation}
Equating \eqref{nabla_tv f case1} and \eqref{nabla_tv f case2} yields that
\begin{equation} \label{tv comp}
\begin{split}
&-\nabla_x f_0(x^1,v) -t(-v^T) \nabla_{xx}f_0(x^1,v) + (-v^T) \nabla_{vx}f_0(x^1,v)\\
&= -\nabla_xf_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1})-t(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\
&\quad +(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}).
\end{split}
\end{equation}
\subsubsection{$C^2_{x,t}$} Similar to the $\nabla_{xv}$ case, using the same perturbation $\hat{r}_1$ of \eqref{set R_sp} and \eqref{c_3}, we have
\begin{equation*}
\begin{split}
\nabla_x[\partial_t f(t,x,v)]\hat{r}_1&=\lim_{\varepsilon \rightarrow 0+} \frac{ \partial_t f(t,x+\varepsilon \hat{r}_1,v)- \partial_t f(t,x,v)}{\varepsilon} \\
&= \lim_{\varepsilon\rightarrow 0+} \left(\frac{ \nabla_x f(t,x+\varepsilon \hat{r}_1,v) - \nabla_x f(t,x,v)}{\varepsilon}\right)(-v)\\
&= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left(\nabla_x [f_0(X(0;t,x+\varepsilon \hat{r}_1,v),V(0;t,x+\varepsilon \hat{r}_1,v))] -\nabla_x f_0(X(0),v)\right)(-v)\\
&= (-v^T) \nabla_{xx} f_0(x^1,v) \hat{r}_1,
\end{split}
\end{equation*}
where we have used $\nabla_x X^{\varepsilon}(0)=I_2, \nabla_x V^{\varepsilon}(0)=0$. Next, for $\hat{r}_2$ of \eqref{set R_sp}, using \eqref{Av=0} in Lemma \ref{lem_RA}, \eqref{c_4}, \eqref{xx star1}, and \eqref{xx star2} gives
\begin{equation*}
\begin{split}
\nabla_x[\partial_t f(t,x,v)] \hat{r}_2 &= \lim_{\varepsilon \rightarrow 0+} \frac{ \partial_t f(t,x+\varepsilon \hat{r}_2,v)- \partial_t f(t,x,v)}{\varepsilon}\\
&= \lim_{\varepsilon\rightarrow 0+} \left(\frac{ \nabla_x f(t,x+\varepsilon \hat{r}_2,v) - \nabla_x f(t,x,v)}{\varepsilon}\right)(-v)\\
&= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} (\nabla_x [f_0(X(0;t,x+\varepsilon \hat{r}_2,v),V(0;t,x+\varepsilon \hat{r}_2,v))] \\
&\quad - (\nabla_x f_0(X(0),R_{x^1}v)R_{x^1} -2 \nabla_v f_0(X(0),R_{x^1}v) A_{v,x^1}) )(-v)\\
&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x X^{\varepsilon}(0) - \nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right) (-v) \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_v f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x V^{\varepsilon}(0) -\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1}) \right) (-v) \\
&:=I_{xt,1}+I_{xt,2},
\end{split}
\end{equation*}
where
\begin{align*}
I_{xt,1}&=\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x X^{\varepsilon}(0) - \nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0)) \lim_{s \rightarrow 0-} \nabla_x X(s) \right. \\
&\quad + \left. \nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\lim_{s\rightarrow 0-} \nabla_x X(s) - \nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right)(-v)\\
&= (-v^T) \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)
\end{bmatrix} \hat{r}_2\\
&\quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}\hat{r}_2 +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\hat{r}_2,\\
I_{xt,2} &= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_v f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x V^{\varepsilon}(0) -\nabla_v f_0(X^\varepsilon(0),V^{\varepsilon}(0))\lim_{s\rightarrow 0-} \nabla_x V(s) \right. \\
&\quad + \left. \nabla_v f_0(X^\varepsilon(0),V^{\varepsilon}(0)) \lim_{s \rightarrow 0-} \nabla_x V(s) -\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1})\right)(-v) \\
&=(-v^T) \begin{bmatrix}
\nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\hat{r}_2 \\
& \quad +(-v^T)(-2A^T_{v,x^1}) \left( \nabla_{xv} f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_{vv} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) \right) \hat{r}_2\\
&=(-v^T) \begin{bmatrix}
\nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\hat{r}_2.
\end{align*}
To sum up the above, we get the following condition:
\begin{equation} \label{xt comp}
\begin{split}
(-v^T) \nabla_{xx} f_0(x^1,v)&= (-v^T) \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1)\\
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)\end{bmatrix}
+(-v^T)
\begin{bmatrix}
\nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\\
&\quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}).
\end{split}
\end{equation}
\subsubsection{$C^2_{v,t}$} Using the perturbation $\hat{r}_1$ of \eqref{set R_sp} and \eqref{c_3},
\begin{equation*}
\begin{split}
\nabla_v [\partial_t f(t,x,v)] \hat{r}_1 &=\lim_{\varepsilon\rightarrow 0+} \frac{ \partial_t f(t,x,v+\varepsilon \hat{r}_1) -\partial_t f(t,x,v)}{\varepsilon}\\
&=\lim_{\varepsilon \rightarrow 0+} \left(\frac{\nabla_x f(t,x,v+\varepsilon \hat{r}_1) (-(v+\varepsilon \hat{r}_1)) -\nabla_x f(t,x,v) (-v)}{\varepsilon} \right) \\
&=-\lim_{\varepsilon \rightarrow 0+} \nabla_x [f_0(X(0;t,x,v+\varepsilon \hat{r}_1), V(0;t,x,v+\varepsilon \hat{r}_1))]\hat{r}_1 \\
&+\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} ( \nabla_x [f_0(X(0;t,x,v+\varepsilon \hat{r}_1), V(0;t,x,v+\varepsilon \hat{r}_1))]-\nabla_x f_0(X(0),v))(-v)\\
&=-\nabla_x f_0(X(0),v) \hat{r}_1 +\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} (\nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))-\nabla_x f_0(X(0),v))(-v)\\
&=-\nabla_x f_0(x^1,v)\hat{r}_1 +(-v^T) \nabla_{xx} f_0(x^1,v) (-t\hat{r}_1) +(-v^T) \nabla_{vx} f_0(x^1,v) \hat{r}_1,
\end{split}
\end{equation*}
where $X^{\varepsilon}(0):=X(0;t,x,v+\varepsilon \hat{r}_1) = x-t(v+\varepsilon \hat{r}_1), V^{\varepsilon}(0):=V(0;t,x,v+\varepsilon \hat{r}_1) =v+\varepsilon \hat{r}_1$. Similar to the case $\nabla_{vx}$, for the perturbation $\hat{r}_2$ of \eqref{set R_sp}, using \eqref{nabla XV_x-}, \eqref{c_4} and \eqref{Av=0} in Lemma \ref{lem_RA} yields:
\begin{equation*}
\begin{split}
\nabla_v[\partial_t f(t,x,v)] \hat{r}_2 &= \lim_{\varepsilon \rightarrow 0+} \frac{\partial_t f(t,x,v+\varepsilon \hat{r}_2) -\partial_t f(t,x,v)}{\varepsilon}\\
&=\lim_{\varepsilon \rightarrow 0+} \left( \frac{ \nabla_x f(t,x,v+\varepsilon \hat{r}_2)(-(v+\varepsilon \hat{r}_2))-\nabla_x f(t,x,v)(-v)}{\varepsilon}\right)\\
&=-\lim_{\varepsilon \rightarrow 0+} \nabla_x [f_0(X(0;t,x,v+\varepsilon \hat{r}_2),V(0;t,x,v+\varepsilon \hat{r}_2))]\hat{r}_2 \\
&\quad +\lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x [f_0(X(0;t,x,v+\varepsilon \hat{r}_2), V(0;t,x,v+\varepsilon \hat{r}_2))] \right. \\
&\quad - \left. \left(\nabla_x f_0(x^1,R_{x^1}v)R_{x^1}-2\nabla_v f_0(x^1,R_{x^1}v)A_{v,x^1} \right)\right)(-v)\\
&=-\left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right) \hat{r}_2 \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x X^{\varepsilon}(0) -\nabla_x f_0(X(0),R_{x^1}v)R_{x^1}\right)(-v) \\
&\quad + \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_v f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x V^{\varepsilon}(0)-\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1})\right)(-v)\\
&:=-\left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right) \hat{r}_2 + I_{vt,1}+I_{vt,2},
\end{split}
\end{equation*}
where
\begin{align*}
I_{vt,1}&:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x X^{\varepsilon}(0)-\nabla_x f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\lim_{s\rightarrow 0-} \nabla_x X(s) \right. \\
&\quad + \left. \nabla_x f_0(X^\varepsilon(0),V^\varepsilon(0))\lim_{s \rightarrow 0-} \nabla_x X(s) - \nabla_xf_0(X(0),R_{x^1}v)R_{x^1}\right) (-v) \\
&=(-v^T) \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^2)
\end{bmatrix}\hat{r}_2 \\
&\quad + (-v^T)R_{x^1} \left( \nabla_{xx} f_0(x^1,R_{x^1}v) (-tR_{x^1}) +\nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1})\right)\hat{r}_2,\\
I_{vt,2}&:= \lim_{\varepsilon \rightarrow 0+} \frac{1}{\varepsilon} \left( \nabla_v f_0(X^{\varepsilon}(0),V^{\varepsilon}(0))\nabla_x V^{\varepsilon}(0)-\nabla_v f_0(X^\varepsilon(0),V^\varepsilon(0))\lim_{s\rightarrow 0-} \nabla_x V(s) \right. \\
& \quad \left.+ \nabla_v f_0(X^\varepsilon(0),V^\varepsilon(0))\lim_{s\rightarrow 0-} \nabla_x V(s) -\nabla_v f_0(X(0),R_{x^1}v)(-2A_{v,x^1}) \right)(-v)\\
&=(-v^T)\begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\hat{r}_2\\
&\quad +(-v^T)(-2A^T_{v,x^1}) \left( \nabla_{xv} f_0(x^1,R_{x^1}v)(-tR_{x^1}) +\nabla_{vv} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1}) \right) \hat{r}_2\\
&=(-v^T)\begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\hat{r}_2.
\end{align*}
Thus, we have the following compatibility condition:
\begin{equation} \label{vt comp}
\begin{split}
&-\nabla_x f_0(x^1,v) +tv^T\nabla_{xx} f_0(x^1,v) +(-v^T) \nabla_{vx} f_0(x^1,v) \\
&=- \left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right)\\
&\quad +(-v^T) \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^2)
\end{bmatrix} + (-v^T) \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\\
&\quad +tv^T R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1}).
\end{split}
\end{equation}
\subsubsection{Derivation of the $C^2_{tt},C^2_{tx}, C^2_{tv},C^2_{xt},C^2_{vt}$ compatibility conditions from \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3} and \eqref{Cond4}} So far, we have derived \eqref{Cond2 1}--\eqref{Cond2 4} so that $f\in C^2_{xv},C^2_{xx},C^2_{vx},C^2_{vv}$. In \eqref{Cond2 1}--\eqref{Cond2 4}, since $\nabla_{xv} f_0(x^1,v)$ is the same as $\nabla_{vx} f_0(x^1,v)^T$, we need to assume \eqref{Cond3}. Similarly, we obtained \eqref{Cond4} because $\nabla_{xx} f_0(x^1,v)$ is a symmetric matrix. In this subsubsection, we will show that the compatibility conditions $C^2_{tt}$ \eqref{time cond}, $C^2_{tx}$ \eqref{tx comp}, $C^2_{tv}$ \eqref{tv comp}, $C^2_{xt}$ \eqref{xt comp}, and $C^2_{vt}$ \eqref{vt comp} are induced by \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. First, we consider the $C^2_{tt}$ compatibility condition. Using \eqref{Av=0} in Lemma \ref{lem_RA}, \eqref{prop d_R}, and \eqref{prop d_A}, one has
\begin{equation*}
\begin{split}
v^T \nabla_{xx}f_0(x^1,v) v &= v^T \bigg(R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) \\
&\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\
&\quad + \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
- 2
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\bigg) v\\
&=v^TR_{x^1}\nabla_{xx}f_0(x^1,R_{x^1}v)R_{x^1} v +v^T \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} v \\
&\quad + v^T \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix}v\\
&= (R_{x^1}v)^T \nabla_{xx} f_0(x^1,R_{x^1}v) (R_{x^1}v).
\end{split}
\end{equation*}
In \eqref{tx comp}, the left-hand side is
\begin{align*}
(-v^T) \nabla_{xx} f_0(x^1,v) &= (-v^T) \bigg(R_{x^{1}} \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) \\
&\quad + (-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} + (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\
&\quad + \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix}
- 2
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(A_{v,x^{1}(x,v)}^2)
\end{bmatrix}
\bigg)\\
&= (-v^T) R_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} + (-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) \\
&\quad + (-v^T) \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} +(-v^T) \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix},
\end{align*}
where we have used \eqref{Av=0}. When we assume \eqref{Cond4}, it holds that $\nabla_{xx}f_0(x^1,v)$ is a symmetric matrix. In other words,
\begin{align*}
&\left(\begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} +
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix}\right)^T \\
&= \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} +
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix},
\end{align*}
which implies that
\begin{equation} \label{vRA prop}
\begin{split}
&(-v^T) \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} +(-v^T) \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix}\\
&=\left( \left( \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^1)
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(R_{x^{1}(x,v)}^2)
\end{bmatrix} + \begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^1)
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}(-2A_{v,x^{1}(x,v)}^2)
\end{bmatrix}\right)(-v)\right)^T=0,
\end{split}
\end{equation}
due to \eqref{prop d_R} and \eqref{prop d_A}. Therefore, the left-hand side in \eqref{tx comp} becomes
\begin{equation} \label{tx comp left}
(-v^T) \nabla_{xx} f_0(x^1,v) = (-v^T) R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} + (-v^T) \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}).
\end{equation}
Using \eqref{vRA prop}, the right-hand side in \eqref{tx comp} is
\begin{equation}\label{tx comp right}
\begin{split}
&(-v)^T \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)
\end{bmatrix}^T+(-v)^T \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x(-2A_{v,x^1(x,v)}^2)
\end{bmatrix}^T\\
& \quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\
& =(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T) R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v) (-2A_{v,x^1}).\\
\end{split}
\end{equation}
From \eqref{tx comp left} and \eqref{tx comp right}, we derive \eqref{tx comp} under the assumptions \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. For the left-hand side in \eqref{tv comp}, we use \eqref{Av=0}, the $C^1$ compatibility condition \eqref{c_x}, \eqref{Cond2 1}--\eqref{Cond2 4}, and \eqref{vRA prop}:
\begin{align*}
&-\nabla_x f_0(x^1,v) +tv^T \nabla_{xx} f_0(x^1,v) + (-v^T) \nabla_{vx} f_0(x^1,v) \\
&= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} - \nabla_v f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) \\
&\quad +tv^TR_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\
&\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1} +(-v)^T \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^2)
\end{bmatrix}.
\end{align*}
Since $\nabla_{xv}f_0(X(0),v)^T = \nabla_{vx} f_0(X(0),v)$ under \eqref{Cond3}, it holds that
\begin{equation} \label{RA prop}
\begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^2)
\end{bmatrix}^T = \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1)\\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)
\end{bmatrix}.
\end{equation}
By \eqref{RA} in Lemma \ref{lem_RA}, \eqref{prop d_R}, \eqref{Cond3}, and the formula \eqref{RA prop} above, it follows that
\begin{equation} \label{A prop}
\begin{split}
&\nabla_v f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) = C(R_{x^1}v)^T (-2A_{v,x^1})=-\frac{2C}{v\cdot n(x^1)} v^T (Qv) \otimes (Qv) =0, \\
&(-v)^T\begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v)\nabla_v(-2A_{v,x^1}^2)
\end{bmatrix}= \left( \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)
\end{bmatrix} (-v) \right)^T=0,
\end{split}
\end{equation}
where $C$ is an arbitrary constant. Then, one obtains that
\begin{equation} \label{tv comp left}
\begin{split}
&-\nabla_x f_0(x^1,v) +tv^T \nabla_{xx} f_0(x^1,v) + (-v^T) \nabla_{vx} f_0(x^1,v) \\
&= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^TR_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\
&\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1}.
\end{split}
\end{equation}
By \eqref{Av=0} and \eqref{Cond4}, the right-hand side in \eqref{tv comp} is
\begin{equation*}
\begin{split}
& -\nabla_xf_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1})-t(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\
&\quad +(-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (R_{x^1}+2tA_{v,x^1}) \\
&=-\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} -2Ct (R_{x^1}v)^TA_{v,x^1} +tv^T R_{x^1} \nabla_{xx}f_0(x^1,R_{x^1}v) R_{x^1} \\
&\quad +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}) + (-v^T) R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) R_{x^1}\\
& = -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^TR_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\
&\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1},
\end{split}
\end{equation*}
where $C$ is an arbitrary constant. Thus, the left-hand side in \eqref{tv comp} is the same as the right-hand side in \eqref{tv comp} under \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. The left-hand side in \eqref{xt comp} is as follows:
\begin{equation*}
(-v^T) \nabla_{xx}f_0(x^1,v) = (-v^T) R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} +(-v^T) \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1}),
\end{equation*}
by \eqref{tx comp left}. Using \eqref{vRA prop}, the right-hand side in \eqref{xt comp} can be computed as
\begin{equation*}
\begin{split}
&(-v^T) \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^1) \\
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x (R_{x^1(x,v)}^2)\end{bmatrix}
+(-v^T)
\begin{bmatrix}
\nabla_vf_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\\
&\quad +(-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\\
&= (-v^T)R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1}+(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}).
\end{split}
\end{equation*}
Hence, the condition \eqref{xt comp} can be deduced from \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. Finally, the condition \eqref{vt comp} is the last remaining case. The left-hand side in \eqref{vt comp} comes from \eqref{tv comp left}:
\begin{align*}
&-\nabla_x f_0(x^1,v) +tv^T \nabla_{xx} f_0(x^1,v) + (-v^T) \nabla_{vx} f_0(x^1,v) \\
&= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^TR_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v)R_{x^1} +tv^T R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) (-2A_{v,x^1})\\
&\quad +(-v^T)R_{x^1} \nabla_{vx}f_0(x^1,R_{x^1}v)R_{x^1}.
\end{align*}
Using \eqref{Av=0} in Lemma \ref{lem_RA}, \eqref{vRA prop}, \eqref{A prop}, and
\begin{align*}
\quad \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_v(R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v(R_{x^1(x,v)}^2)
\end{bmatrix}&= (-t)\begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^1) \\\nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^2)
\end{bmatrix},\\
\begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}&=(-t) \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}+ \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^1)\\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^2)
\end{bmatrix},
\end{align*}
the right-hand side in \eqref{vt comp} can be simplified as
\begin{equation*}
\begin{split}
& - \left(\nabla_x f_0(x^1,R_{x^1}v) R_{x^1} +\nabla_v f_0(x^1,R_{x^1}v)(-2A_{v,x^1})\right)\\
&\quad +(-v^T) \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^1) \\ \nabla_x f_0(x^1,R_{x^1}v) \nabla_v (R_{x^1(x,v)}^2)
\end{bmatrix} + (-v^T) \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1(x,v)}^2)
\end{bmatrix}\\
&\quad + tv^T R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1}) \\
&=-\nabla_x f_0(x^1,R_{x^1}v)R_{x^1} +tv^T \begin{bmatrix}
\nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^1) \\\nabla_x f_0(x^1,R_{x^1}v) \nabla_x(R_{x^1(x,v)}^2)
\end{bmatrix}+tv^T \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_x (-2A_{v,x^1(x,v)}^2)
\end{bmatrix} \\
&\quad +(-v^T) \begin{bmatrix}
\nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^1) \\ \nabla_v f_0(x^1,R_{x^1}v) \nabla_v (-2A_{v,x^1}^2)
\end{bmatrix}
+ tv^T R_{x^1} \nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1} \\
&\quad +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v)(R_{x^1}+2tA_{v,x^1})\\
&= -\nabla_x f_0(x^1,R_{x^1}v)R_{x^1}+tv^T R_{x^1}\nabla_{xx} f_0(x^1,R_{x^1}v) R_{x^1}+tv^T R_{x^1}\nabla_{vx} f_0(x^1,R_{x^1}v)(-2A_{v,x^1}) \\
&\quad +(-v^T)R_{x^1} \nabla_{vx} f_0(x^1,R_{x^1}v) R_{x^1}.
\end{split}
\end{equation*}
Hence, the \eqref{vt comp} condition can be obtained under \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4}. \\
\hide
\subsubsection{Symmetric presentation of \eqref{Cond} under the assumption \eqref{if}}
The above can be simplified further. Note that the 3rd condition of \eqref{Cond2 1}--\eqref{Cond2 4} is immediate from the specular reflection boundary condition (taking $\nabla_{v}$ twice). From the 1st and 4th conditions, we have
\begin{equation}
\begin{split}
(-2A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}} &= (-2A^{T}_{v,x^{1}}) R_{x^{1}} \nabla_{xv}f_{0}(x^{1}, v) - (-2A^{T}_{v,x^{1}}) \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) (-2A_{v,x^{1}}) \\
R_{x^{1}} \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}) &= \nabla_{vx}f_{0}(x^{1}, v)R_{x^{1}}(-2A_{v,x^{1}}) - (-2A^{T}_{v,x^{1}})\nabla_{vv}f_{0}(x^{1}, R_{x^1}v)(-2A_{v,x^{1}}).
\end{split}
\end{equation}
Plugging these into the 2nd condition, \eqref{Cond2 2} is rewritten as
\begin{equation} \label{re 2}
\begin{split}
& \nabla_{xx}f_{0}(x^{1},v) + \nabla_{vx}f_{0}(x^{1}, v)R_{x^{1}}A_{v,x^{1}} + (R_{x^{1}}A_{v,x^{1}})^{T} \nabla_{xv}f_{0}(x^{1}, v) \\
&= R_{x^{1}}\nabla_{xx}f_{0}(x^{1}, R_{x^1}v)R_{x^{1}} + R_{x^{1}}\nabla_{vx}f_{0}(x^{1}, R_{x^1}v)(-A_{v,x^{1}} ) \\
&\quad + (-A^{T}_{v,x^{1}}) \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) R_{x^{1}}
+
{\color{blue}
\begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{1}
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{2}
\end{bmatrix}
}
\end{split}
\end{equation}
\\
{\bf Conclusion.}
From \eqref{Cond2 3} and Lemma \ref{lem_RA}, the condition \eqref{Cond2 1} can be written in the symmetric form
\begin{equation} \label{sym Cond2_1}
\begin{split}
R_{x^{1}} \big[ \nabla_{xv}f_{0}(x^{1},v) + \nabla_{vv}f_{0}(x^{1},v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} \big] R_{x^{1}}
&= \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) + \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} .
\end{split}
\end{equation}
Similarly, the 4th condition gives
\begin{equation} \label{sym Cond2_2}
\begin{split}
R_{x^{1}} \big[ \nabla_{vx}f_{0}(x^{1},v) + \frac{ (Qv)\otimes (Qv)}{v\cdot n} \nabla_{vv}f_{0}(x^{1}, v) \big] R_{x^{1}}
&= \nabla_{vx}f_{0}(x^{1}, R_{x^1}v) + \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \nabla_{vv}f_{0}(x^{1}, R_{x^1}v) .
\end{split}
\end{equation}
Condition \eqref{re 2} (that is, \eqref{Cond2 2}) yields
\begin{equation} \label{sym Cond2_3}
\begin{split}
&R_{x^{1}}\big[ \nabla_{xx}f_{0}(x^{1},v) + \nabla_{vx}f_{0}(x^{1}, v) \frac{ (Qv)\otimes (Qv)}{v\cdot n} + \frac{ (Qv)\otimes (Qv)}{v\cdot n} \nabla_{xv}f_{0}(x^{1}, v) \big] R_{x^{1}} \\
&= \nabla_{xx}f_{0}(x^{1}, R_{x^1}v) + \nabla_{vx}f_{0}(x^{1}, R_{x^1}v)\frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)}
+ \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \nabla_{xv}f_{0}(x^{1}, R_{x^1}v) \\
&\quad +
{\color{blue}
\underbrace{
R_{x^{1}}
\begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{1}
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_{x}[R_{x^{1}}]_{2}
\end{bmatrix}
R_{x^{1}}
}_{(?)}
}
\end{split}
\end{equation}
\unhide
\subsection{Proof of Theorem \ref{thm 2}}
\begin{proof} [Proof of Theorem \ref{thm 2}]
By the same argument as in the proof of Theorem \ref{thm 1}, it suffices to set $k=1$. Throughout this section, we have shown that \eqref{Cond2 1}--\eqref{Cond2 4}, \eqref{Cond3}, and \eqref{Cond4} yield $C^{2}_{t,x,v}$ regularity of the solution $f(t,x,v)$ in \eqref{solution}. However, \eqref{Cond2 3} is an obvious consequence of \eqref{BC}, and \eqref{Cond2 4} is identical to \eqref{Cond2 1} once we assume \eqref{C2 cond34}, which is the same as \eqref{Cond3} and \eqref{Cond4}. Hence, we omit \eqref{Cond2 3} and \eqref{Cond2 4} in the statement.
In Remark \ref{extension C2 cond34}, under \eqref{C2 cond34}, we derived that
\begin{equation*}
\nabla_x f_0(x,v)R_x = \nabla_x f_0(x,R_xv) \quad \textrm{and} \quad \nabla_v f_0(x,v) \frac{(Qv) \otimes (Qv)}{v\cdot n} R_x = \nabla_v f_0(x,R_xv)\frac{(QR_xv)\otimes(QR_xv)}{R_x v\cdot n},
\end{equation*}
\end{equation*}
for all $(x,v) \in \gamma_- \cup \gamma_+$. In Remark \ref{example}, we showed that
\begin{equation*}
f_0(x,v)=G(x,\vert v \vert), \quad (x,v) \in \partial \Omega \times \mathbb{R}^2,
\end{equation*}
where $G$ is a $C^1_{x,v}$ function. Notice that the function $G$ must be $C^2_{x,v}$ for $f_0 \in C^2_{x,v}(\bar\Omega\times \mathbb{R}^2)$ in Theorem \ref{thm 2}. Since $f_0(x,v)=G(x,\vert v \vert)$ is a radial function with respect to $v$ and $\nabla_x f_0(x,v) \parallel v^T$ for all $(x,v)\in \gamma_-\cup \gamma_+$, $\nabla_x f_0(x,v)$ must vanish on $\partial \Omega$.
Now let us change \eqref{Cond2 1} and \eqref{Cond2 2} into symmetric forms. First, we multiply \eqref{Cond2 1} by $R_{x^1}$ from both left and right. Then applying \eqref{Cond2 3} and \eqref{RA}, we obtain
\begin{align*}
R_{x^1} \big[ \nabla_{xv}f_{0}(x^1,v) + \nabla_{vv}f_{0}(x^1,v) \frac{ (Qv)\otimes (Qv)}{v\cdot n(x^1)} \big] R_{x^1}
&= \nabla_{xv}f_{0}(x^1, R_{x^1}v) + \nabla_{vv}f_{0}(x^1, R_{x^1}v) \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \notag \\
&\quad+
R_{x^1}
\begin{bmatrix}
\nabla_{v}f_{0}(x^1 , R_{x^1}v) \nabla_x(R^1_{x^1(x,v)})
\\
\nabla_{v}f_{0}(x^1, R_{x^1}v) \nabla_x(R^2_{x^1(x,v)})
\end{bmatrix}
R_{x^1}.
\end{align*}
Also, plugging the above into \eqref{Cond2 2} and using \eqref{RA} again, we obtain
\begin{align*}
&R_{x^1}\big[ \nabla_{xx}f_{0}(x^1,v) + \nabla_{vx}f_{0}(x^1, v) \frac{ (Qv)\otimes (Qv)}{v\cdot n(x^1)} + \frac{ (Qv)\otimes (Qv)}{v\cdot n(x^1)} \nabla_{xv}f_{0}(x^1, v) \big] R_{x^1} \\
&= \nabla_{xx}f_{0}(x^1, R_{x^1}v) + \nabla_{vx}f_{0}(x^1, R_{x^1}v)\frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)}
+ \frac{(QR_{x^1}v)\otimes (QR_{x^1}v)}{R_{x^1}v\cdot n(x^1)} \nabla_{xv}f_{0}(x^1, R_{x^1}v) \\
&\quad -2R_{x^1}
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{1}_{v,x^1}
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_{v}A^{2}_{v,x^1}
\end{bmatrix} R_{x^1}A_{v,x^1}R_{x^1}
+
A_{v,x^1}\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(R^1_{x^1(x,v)}) \\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(R^2_{x^1(x,v)})
\end{bmatrix}R_{x^1} \\
&\quad
+ R_{x^1} \begin{bmatrix}
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_x (R^1_{x^1(x,v)})
\\
\nabla_{x}f_{0}(x^{1}, R_{x^1}v) \nabla_x(R^2_{x^1(x,v)})
\end{bmatrix} R_{x^1}
- 2 R_{x^1}
\begin{bmatrix}
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(A^1_{v,x^1(x,v)})
\\
\nabla_{v}f_{0}(x^{1}, R_{x^1}v) \nabla_x(A^2_{v,x^1(x,v)})
\end{bmatrix} R_{x^1}.
\end{align*}
By Lemma \ref{d_RA} and Lemma \ref{dx_A}, $\nabla_x(R^1_{x^1(x,v)}), \nabla_x(R^2_{x^1(x,v)}), \nabla_x(A^1_{v,x^1(x,v)})$, and $\nabla_v(A^2_{v,x^1(x,v)})$ depend only on $n(x^1)$ and $v$. We rewrite $x^1$ as $x$ for $(x,v) \in \gamma_-$ because $n(x^1)=x^1$. Since $\nabla_x f_0(x,v)=0$ for $x\in \partial \Omega$, we obtain \eqref{C2 cond 1} and \eqref{C2 cond 2}.
Lastly, we will prove that $f(t,x,v)$ is not of class $C^2_{t,x,v}$ at times $t$ such that $t^k(t,x,v)=0$ for some $k$ if one of the conditions \eqref{C2 cond34}, \eqref{C2 cond 1}, and \eqref{C2 cond 2} fails for some $(x,v)\in \gamma_-$. As in the proof of Theorem \ref{thm 1}, it suffices to set $k=1$ and prove that $f(t,x,v)$ is not of class $C^2_{t,x,v}$ at times $t$ satisfying $t^1(t,x,v)=0$.
Let $t^*$ be a time $t$ such that $t^1(t,x,v)=0$. Recall that the condition \eqref{C2 cond34} was necessary to guarantee $\nabla_{xv}^Tf_0(x,v)=\nabla_{vx}f_0(x,v)$ and $\nabla_{xx}^T f_0(x,v) = \nabla_{xx}f_0(x,v)$ for $x\in \partial \Omega$. In other words, $\nabla_{xv}f_0(x,v)^T \neq \nabla_{vx} f_0(x,v)$ and $\nabla_{xx}^Tf_0(x,v)\neq \nabla_{xx}f_0(x,v)$ without \eqref{C2 cond34}. For $\nabla_{xv}$ and $\nabla_{vx}$ in the direction $\hat{r}_1$, we derived \eqref{nabla_xv f case1} and \eqref{nabla_vx f case1} at $t^*$:
\begin{equation*}
\nabla_{xv}f(t,x,v) = (-t) \nabla_{xx} f_0(x^1,v) + \nabla_{xv} f_0(x^1,v), \quad \nabla_{vx}f(t,x,v)=(-t) \nabla_{xx}f_0(x^1,v)+\nabla_{vx}f_0(x^1,v).
\end{equation*}
Thus, if \eqref{C2 cond34} does not hold, then $\nabla_{xv}^T f(t,x,v) \neq \nabla_{vx}f(t,x,v)$, which implies that $f(t,x,v)$ is not $C^2_{t,x,v}$ at time $t^*$. Next, suppose that \eqref{C2 cond 1} fails for some $(x,v)\in \gamma_-$. The condition \eqref{C2 cond 1} is derived from the $\nabla_{xv}$ compatibility condition \eqref{Cond2 1}. Therefore, the directional derivatives \eqref{nabla_xv f case1} and \eqref{nabla_xv f case2} with respect to $\hat{r}_1$ and $\hat{r}_2$ are not the same. This means that $f(t,x,v)$ is not $C^2_{t,x,v}$ at time $t^*$.
Finally, we assume that \eqref{C2 cond 2} does not hold for some $(x,v)\in \gamma_-$.
The condition \eqref{C2 cond 2} comes from the $\nabla_{xx}$, $\nabla_{xv}$, and $\nabla_{vx}$ compatibility conditions \eqref{Cond2 1}, \eqref{Cond2 2}, and \eqref{Cond2 4}. One may assume without loss of generality that the initial data $f_0$ satisfies \eqref{C2 cond34} and \eqref{C2 cond 1}. Then only the $\nabla_{xx}$ compatibility condition \eqref{Cond2 2} is violated. Similarly to the above, the directional derivatives $\nabla_{xx}$ with respect to $\hat{r}_1$ and $\hat{r}_2$ are not the same. Hence, $f(t,x,v)$ is not $C^2_{t,x,v}$ at time $t^*$ without \eqref{C2 cond 2}. This finishes the proof.
\end{proof}
\section{Regularity estimate of $f$}
\subsection{First order estimates of characteristics} Using Definition \ref{notation},
\begin{equation*}
V(0;t,x,v) = R_{\ell} R_{\ell-1} \cdots R_{2} R_{1} v, \quad \text{for some $\ell$ such that}\quad t^{\ell+1} < 0 \leq t^{\ell},
\end{equation*}
\end{equation*}
where
\[
R_{j} = I - 2 n(x^{j})\otimes n(x^{j}).
\]
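For instance (a quick check directly from this definition, not used elsewhere): if $n(x^{j})=(1,0)$, then
\[
R_{j} = I - 2\, n(x^{j})\otimes n(x^{j}) = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix},
\]
which flips the normal component of the velocity and keeps the tangential one, as expected for specular reflection. In general each $R_{j}$ is symmetric and orthogonal with $R_{j}^{2}=I$.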
For the above $\ell$,
\begin{equation*}
X(0;t,x,v) = x^{\ell} - v^{\ell}t^{\ell},
\end{equation*}
where inductively,
\[
x^{k} = x^{k-1} - v^{k-1}(t^{k-1} - t^{k}),\quad 2\leq k \leq \ell,
\]
and
\[
x^{1} = x - v(t-t^{1}) = x - vt_{\mathbf{b}}.
\]
Alternatively, using the rotational symmetry of the disk, we can also express
\[
x^{\ell} = Q_{\theta}^{\ell-1}x^{1},
\]
where $Q_{\theta}$ is the matrix of rotation by the angle $\theta$ (acting on the boundary of the disk). The angle $\theta$ is uniquely determined by the first (backward in time) bounce, that is, by $v\cdot n(x_{\mathbf{b}})$. \\
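For concreteness (this is only a statement of the sign convention; it is consistent with the explicit matrix $Q_{\phi}$ written out in the proof of Lemma \ref{der theta} below), we take
\[
Q_{\theta} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},
\]
i.e., $Q_{\theta}$ is the counterclockwise rotation by $\theta$, so that $Q_{\theta}^{k}=Q_{k\theta}$.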
\begin{lemma} \label{der theta}
Here, $\theta$ is the angle at which $v$ is rotated to $v^1$. Moreover, $\theta>0$ is the same as the angle of rotation from $x^{k}$ to $x^{k+1}$ for $k=1,2,\cdots,l-1$. Then, derivatives of $\theta$ with respect to $x$ and $v$ are
\begin{equation} \label{d_theta}
\nabla_x \theta =-\frac{2}{\sin \frac{\theta}{2}} Q_{-\frac{\theta}{2}}n(x^{1}), \quad \nabla_v \theta = 2\left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} - \frac{1}{\vert v \vert}\right) Q_{-\frac{\theta}{2}}n(x^{1}),
\end{equation}
provided $n(x^1)\cdot v \neq0$.
\end{lemma}
\begin{proof}
From the definition of $\theta$,
\begin{equation} \label{theta}
\cos \left( \frac{\pi}{2} - \frac{\theta}{2} \right) = \sin \left( \frac{\theta}{2} \right) = - \left[ n(x^1) \cdot \frac{v}{\vert v \vert} \right].
\end{equation}
Thus, taking $\nabla_x$ yields
\begin{align*}
\frac{1}{2} \cos \frac{\theta}{2} \nabla_x \theta= -\frac{v}{\vert v \vert} \nabla_x \left( n(x^1)\right)=-\frac{v}{\vert v \vert} \left( I - \frac{v \otimes n(x^1)}{v \cdot n(x^1)} \right)=-\frac{v}{\vert v \vert}+ \frac{\vert v \vert}{v\cdot n(x^1)} n(x^1),
\end{align*}
where we used the product rule in Lemma \ref{matrix notation} and \eqref{normal} in Lemma \ref{d_n}. Note that rotating the normal vector $n(x^1)$ by the angle $\phi=\frac{\pi}{2}-\frac{\theta}{2}>0$ gives the vector $- \frac{v}{\vert v \vert}$. In other words, it holds that
\begin{equation} \label{v_n}
-\frac{v}{\vert v \vert} = Q_{\phi} n (x^1),
\end{equation}
where $Q_{\phi}= \begin{bmatrix} \cos \phi & -\sin \phi \\ \sin \phi & \cos \phi \end{bmatrix} =\begin{bmatrix} \sin \frac{\theta}{2} & -\cos \frac{\theta}{2} \\ \cos \frac{\theta}{2} & \sin \frac{\theta}{2} \end{bmatrix}$. Thus,
\begin{align*}
\nabla_x \theta &= \frac{2}{\cos \frac{\theta}{2}} \left( Q_{\phi} - \frac{1}{\sin \frac{\theta}{2}} I \right) n(x^1)=\frac{2}{\cos \frac{\theta}{2}\sin \frac{\theta}{2}} \begin{bmatrix} \sin^2 \frac{\theta}{2} -1 & -\cos \frac{\theta}{2} \sin \frac{\theta}{2} \\ \sin\frac{\theta}{2}\cos \frac{\theta}{2}& \sin^2 \frac{\theta}{2} -1\end{bmatrix}n(x^1) \\
&= -\frac{2}{\sin \frac{\theta}{2}} \begin{bmatrix} \cos \frac{\theta}{2} & \sin \frac{\theta}{2} \\ -\sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix}n(x^1) = -\frac{2}{\sin \frac{\theta}{2}} Q_{-\frac{\theta}{2}}n(x^{1}).
\end{align*}
Similarly, taking the derivative $\nabla_v$ of both sides in \eqref{theta}:
\begin{align*}
\frac{1}{2} \cos \frac{\theta}{2} \nabla_v \theta=-\frac{v}{\vert v \vert} \nabla_v \left( n(x^1)\right) -n(x^1) \left( \frac{1}{\vert v \vert} I- \frac{v \otimes v}{\vert v \vert^3} \right)&=t_{\mathbf{b}}\frac{v}{\vert v \vert} \big(I - \frac{v\otimes n(x^1)}{v\cdot n(x^1)} \big)-n(x^1) \left( \frac{1}{\vert v \vert} I- \frac{v \otimes v}{\vert v \vert^3} \right)\\
&=t_{\mathbf{b}} \frac{v}{\vert v \vert} -t_{\mathbf{b}} \frac{ \vert v \vert}{v\cdot n(x^1)} n (x^1) - \frac{1}{\vert v \vert} n(x^1) + \frac{v \cdot n(x^1)}{\vert v \vert^2} \frac{v}{\vert v \vert},
\end{align*}
where we used the product rule in Lemma \ref{matrix notation} and \eqref{normal} in Lemma \ref{d_n}. From \eqref{v_n},
\begin{align*}
\nabla_v \theta &= \frac{2}{\cos \frac{\theta}{2} \sin \frac{\theta}{2}} \left( -t_{\mathbf{b}}\sin\frac{\theta}{2} \left[Q_{\phi}-\frac{1}{\sin \frac{\theta}{2}}I \right]n(x^1)+\frac{\sin^2\frac{\theta}{2}}{\vert v \vert} \left[ Q_{\phi} - \frac{1}{\sin \frac{\theta}{2}} I\right]n(x^1) \right)\\
&=\frac{2 t_{\mathbf{b}}}{\sin \frac{\theta}{2}} \begin{bmatrix} \cos \frac{\theta}{2}& \sin \frac{\theta}{2} \\ -\sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix}n(x^1) -\frac{2}{\vert v \vert} \begin{bmatrix} \cos \frac{\theta}{2} & \sin\frac{\theta}{2} \\ -\sin\frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix} n(x^1)\\
&=2\left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} - \frac{1}{\vert v \vert}\right) \begin{bmatrix} \cos \frac{\theta}{2} & \sin\frac{\theta}{2} \\ -\sin\frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix} n(x^1)
= 2\left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} - \frac{1}{\vert v \vert}\right) Q_{-\frac{\theta}{2}}n(x^{1}).
\end{align*}
\end{proof}
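As a quick illustration of \eqref{theta} and \eqref{d_theta} (a sanity check only, not used in the sequel): for normal incidence, $v=-\vert v \vert n(x^1)$, one has
\begin{equation*}
\sin \frac{\theta}{2} = - \left[ n(x^1)\cdot \frac{v}{\vert v \vert}\right] = 1, \qquad \theta = \pi, \qquad x^{k+1}=Q_{\pi}x^{k}=-x^{k},
\end{equation*}
so consecutive bounce points are antipodal and each chord has length $2\sin\frac{\theta}{2}=2$, the diameter of the unit disk. In the grazing limit $v\cdot n(x^1) \to 0^-$ we have $\theta \to 0$, and the factor $\frac{1}{\sin\frac{\theta}{2}}$ in \eqref{d_theta} blows up; since $\vert v \cdot n(x^1)\vert = \vert v \vert \sin\frac{\theta}{2}$, this is precisely the grazing singularity $\frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}})\vert}$ that recurs in the estimates below.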
| 2,299 | 129,481 |
en
|
train
|
0.4976.41
|
\begin{lemma} \label{X,V}
Let $(t,x,v) \in \mathbb{R}_+\times \Omega\times \mathbb{R}^2$. The specular characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$ are defined in Definition \ref{notation}. Whenever $n(x^1)\cdot v\neq0$, we have derivatives of the characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$:
\begin{align} \label{n_x,v}
\begin{split}
\nabla_x X(0;t,x,v) &= Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) \\
&\quad -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x^1) \otimes \nabla_x \theta \right), \\
\nabla_v X(0;t,x,v)&=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right)+\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right) \\
&\quad - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right) -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta - \pi}\left(n(x^1) \otimes \nabla_v \theta\right), \\
\nabla_x V(0;t,x,v)&= -lQ_{l\theta-\frac{\pi}{2}} \left( v\otimes \nabla_x \theta \right),\\
\nabla_v V(0;t,x,v)&= Q_{\theta} ^l -lQ_{l\theta-\frac{\pi}{2}} \left( v\otimes \nabla_v \theta \right),
\end{split}
\end{align}
where $\theta$ is the angle given in Lemma \ref{der theta}, $t_{\mathbf{b}}$ is the backward exit time defined in Definition \ref{notation}, $l$ is the bouncing number, and $Q_\theta$ is a rotation matrix by $\theta$.
\end{lemma}
\begin{proof}
Recall
\begin{align*}
X(0;t,x,v) = x^l - v^l t^l , \quad V(0;t,x,v) = v^l.
\end{align*}
Using the rotation matrix $Q_\theta$, $x^l$ and $v^l$ can be expressed by
\begin{align} \label{x,v_l}
x^l = Q_{\theta}^{l-1} x^1, \quad v^l = Q_{\theta}^l v.
\end{align}
By the chain rule,
\begin{align*}
\frac{\partial{(X(0;t,x,v),V(0;t,x,v)})}{\partial{(x,v)}}= \frac{\partial{(X(0;t,x,v),V(0;t,x,v))}}{\partial{(t^l,x^l,v^l)}} \frac{\partial(t^l,x^l,v^l)}{\partial(x,v)}= \begin{bmatrix} -v^l & I & -t^l I \\ \textbf{0}_{2\times 1} & \textbf{0}_{2 \times 2} & I\end{bmatrix} \begin{bmatrix} \nabla_x t^l & \nabla_v t^l \\ \nabla_x x^l & \nabla_v x^l \\ \nabla_x v^l & \nabla_v v^l \end{bmatrix},
\end{align*}
where $I$ is a $2\times 2$ identity matrix. For the derivative of $X(0;t,x,v),V(0;t,x,v)$, it is necessary to find the derivative of $t^l,x^l,$ and $v^l$. Using the expression \eqref{x,v_l} and \eqref{d_matrix} in Lemma \ref{matrix notation}, we derive
\begin{align*}
\nabla_x x^l &= \nabla_x \left[ Q_\theta ^{l-1} x^1 \right]=Q_{\theta}^{l-1} \nabla_x x^1 -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_x \theta\\
&\hspace{.3cm} \qquad \qquad \qquad =Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_x \theta,\\
\nabla_v x^l &= \nabla_v \left[ Q_\theta ^{l-1} x^1 \right]=Q_{\theta}^{l-1} \nabla_v x^1 -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_v \theta\\
&\hspace{.3cm} \qquad \qquad \qquad =-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_v \theta,\\
\nabla_x v^l &= \nabla_x \left[ Q_\theta^l v \right]= -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_x \theta, \\
\nabla_v v^l &= \nabla_v \left [ Q_\theta^l v \right]= Q_{\theta} ^l -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_v \theta.
\end{align*}
For the derivative of $t^l$, we rewrite $t^l$ as
\begin{align*}
t^l = t-(t-t^1) - \sum_{k=1}^{l-1}(t^k-t^{k+1})= t- t_{\mathbf{b}} -\sum_{k=1}^{l-1} (t^k-t^{k+1}).
\end{align*}
Since $\displaystyle t^k-t^{k+1}=\frac{2\sin\frac{\theta}{2}}{\vert v \vert}$ for all $k=1,2,\dots,l-1$, it holds that
\begin{align} \label{t_ell}
t^l = t-t_{\mathbf{b}}- \frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert}, \quad l-1 = \frac{\vert v \vert}{ 2 \sin \frac{\theta}{2}} \left(t-t_{\mathbf{b}} -t^l\right).
\end{align}
Taking the derivative of $t^l$ with respect to $x,v$
\begin{align} \label{nabla x,v t_ell}
\begin{split}
\nabla_x t^l &= -\nabla_x t_{\mathbf{b}} -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta=-\frac{n(x^1)}{v \cdot n(x^1)}-\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta=\frac{1}{\vert v \vert \sin\frac{\theta}{2}} n(x^1) -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta, \\
\nabla_v t^l &= -\nabla_v t_{\mathbf{b}} + \frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta=t_{\mathbf{b}} \frac{n(x^1)}{v \cdot n(x^1)} +\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta\\
&\hspace{7.1cm}=-\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} n(x^1) +\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1) \cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta.
\end{split}
\end{align}
Also note that, from \eqref{v_n} and \eqref{t_ell}, we have
\begin{equation} \label{cancel}
\begin{split}
&-(l-1)Q_{(l-1)\theta-\frac{\pi}{2}} \left(x^1 \otimes \nabla \theta\right) +\frac{(l-1)\cos\frac{\theta}{2}}{\vert v \vert} Q_{\theta}^l \left(v \otimes \nabla \theta\right) \\
&= -(l-1)\left(Q_{(l-1)\theta -\frac{\pi}{2}} +\cos \frac{\theta}{2} Q_{l\theta}Q_{\frac{\pi}{2}-\frac{\theta}{2}}\right) \left(n(x^1) \otimes \nabla \theta \right) \\
&= - (l-1) Q_{(l-1)\theta -\frac{\pi}{2}} \begin{bmatrix} \sin^2 \frac{\theta}{2} & \sin \frac{\theta}{2} \cos \frac{\theta}{2} \\ -\sin \frac{\theta}{2} \cos \frac{\theta}{2} & \sin^2 \frac{\theta}{2} \end{bmatrix} \left(n(x^1) \otimes \nabla \theta \right) \\
&= -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x^1) \otimes \nabla \theta \right).
\end{split}
\end{equation}
Hence, using \eqref{cancel} and $x^{1}=n(x^{1})$,
\begin{align*}
\begin{split}
\nabla_x X(0;t,x,v) &= \nabla_x x^l - t^l \nabla_x v^l -v^l \otimes \nabla_x t^l\\
&= Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_x \theta \\
& \quad +t^l l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_x \theta-\frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) +\frac{(l-1)\cos\frac{\theta}{2}}{\vert v \vert} Q_{\theta}^l \left(v \otimes \nabla_x \theta\right) \\
&=Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) \\
&\quad -(l-1)Q_{(l-1)\theta-\frac{\pi}{2}} \left(x^1 \otimes \nabla_x \theta\right) +\frac{(l-1)\cos\frac{\theta}{2}}{\vert v \vert} Q_{\theta}^l \left(v \otimes \nabla_x \theta\right) \\
&=Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x^1)\right) \\
&\quad -\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x^1) \otimes \nabla_x \theta \right), \\
\nabla_v X(0;t,x,v)&=\nabla_v x^l -t^l \nabla_v v^l-v^l \otimes \nabla_v t^l\\
&=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -(l-1)\left(\begin{bmatrix} \sin(l-1)\theta & \cos(l-1)\theta \\ - \cos (l-1)\theta & \sin(l-1)\theta \end{bmatrix} x^1\right) \otimes \nabla_v \theta \\
&\quad -t^l Q_\theta ^l+t^l l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_v \theta\\
&\quad +\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right) - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right)+ \frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} Q_\theta^l\left( v \otimes \nabla_v \theta \right),\\
&= -t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) -t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right)
+\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right)\\
&\quad - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right) -(l-1)Q_{(l-1)\theta-\frac{\pi}{2}}\left(x^1 \otimes \nabla_v \theta \right) + \frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} Q_\theta^l\left( v \otimes \nabla_v \theta \right)\\
&=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x^1)}{v\cdot n(x^1)} \right) - t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right)+\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x^1)\right) \\
&\quad - \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right)
-\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta - \pi}\left(n(x^1) \otimes \nabla_v \theta\right) ,\\
\nabla_x V(0;t,x,v)&=\nabla_x v^l = -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_x \theta=-lQ_{l\theta-\frac{\pi}{2}} \left( v\otimes \nabla_x \theta \right),\\
\nabla_v V(0;t,x,v)&= \nabla_v v^l = Q_{\theta} ^l -l \left( \begin{bmatrix} \sin l \theta & \cos l \theta \\ -\cos l \theta & \sin l \theta \end{bmatrix} v\right) \otimes \nabla_v \theta=Q_{\theta}^l -l Q_{l \theta-\frac{\pi}{2}} \left( v\otimes \nabla_v \theta \right).
\end{split}
\end{align*}
\end{proof}
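A quick way to read the last two formulas (an observation only, using the rotation convention fixed above): for a fixed bouncing number $l$, which is locally constant in $(x,v)$ away from the grazing set, $V(0;t,x,v)=Q_{\theta}^{l}v=Q_{l\theta}v$ depends on $x$ only through $\theta$, so the chain rule gives
\begin{equation*}
\nabla_x V(0;t,x,v) = \left(\partial_{\theta} Q_{l\theta}\, v\right)\otimes \nabla_x\theta = l\,Q_{l\theta+\frac{\pi}{2}}\left(v\otimes\nabla_x\theta\right) = -l\,Q_{l\theta-\frac{\pi}{2}}\left(v\otimes\nabla_x\theta\right),
\end{equation*}
since $\partial_{\alpha}Q_{\alpha}=Q_{\alpha+\frac{\pi}{2}}$ and $Q_{\alpha+\pi}=-Q_{\alpha}$. The formula for $\nabla_v V(0;t,x,v)$ contains the extra term $Q_{\theta}^{l}$ coming from the explicit $v$-dependence.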
\begin{lemma}
The exit backward time $t_{\mathbf{b}}$ and the $l$-th bouncing backward time $t^l$ are defined in Definition \ref{notation}. Then, it holds that
\begin{align}\label{tb esti}
t_{\mathbf{b}} \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}, \quad t^l \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}.
\end{align}
\end{lemma}
\begin{proof}
Note that
\begin{align*}
t_{\mathbf{b}} = t-t^1= \frac{\vert x - x^1\vert}{ \vert v \vert }, \quad t^l = \frac{\vert x^l - X(0;t,x,v) \vert}{\vert v^l \vert}.
\end{align*}
Whenever $\theta$ is the angle at which $v$ is rotated to $v^1$, consecutive bounce points on the unit circle subtend the central angle $\theta$, so every chord of the trajectory has length $2\sin\frac{\theta}{2}$; since $x$ lies on the chord ending at $x^1$ and $X(0;t,x,v)$ lies on the chord starting at $x^l$, one obtains that
\begin{align*}
\vert x-x^1 \vert\leq 2 \sin \frac{\theta}{2}, \quad \vert x^l - X(0;t,x,v)\vert \leq 2 \sin \frac{\theta}{2}.
\end{align*}
From the above inequalities and $\vert v^l \vert = \vert v \vert $, we obtain
\begin{align*}
t_{\mathbf{b}} \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}, \quad t^l \leq \frac{2\sin \frac{\theta}{2}}{ \vert v \vert}.
\end{align*}
\end{proof}
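For orientation (an illustrative example only, in the unit disk normalization used throughout): if $\vert v \vert =1$ and the bounce angle satisfies $\theta=\frac{\pi}{3}$, then each chord of the trajectory has length $2\sin\frac{\theta}{2}=1$, so $t_{\mathbf{b}}, t^{l}\leq 1$ and, by \eqref{t_ell}, the backward trajectory undergoes roughly $t$ bounces up to time $t$. As $\theta \to 0$ (grazing trajectories), the chords shorten and the number of bounces on a fixed time interval grows like $\frac{\vert v \vert t}{2\sin \frac{\theta}{2}}$.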
\begin{lemma} \label{est der X,V}
Under the same assumption as in Lemma \ref{X,V}, we have the following estimates of the derivatives of the characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$:
\begin{align*}
\begin{split}
\vert \nabla_x X(0;t,x,v) \vert &\lesssim \frac{\vert v \vert} { \vert v \cdot n(x_{\mathbf{b}}) \vert}\left( 1 + \vert v \vert t\right),\\
\vert \nabla_v X(0;t,x,v) \vert &\lesssim \frac{1} { \vert v \vert}\left( 1 + \vert v \vert t \right), \\
\vert \nabla_x V(0;t,x,v) \vert & \lesssim \frac{\vert v \vert^3}{ \vert v \cdot n(x_{\mathbf{b}}) \vert^2} \left( 1+ \vert v \vert t \right), \\
\vert \nabla_v V(0;t,x,v) \vert & \lesssim \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \left( 1+ \vert v \vert t \right),
\end{split}
\end{align*}
where $n(x_{\mathbf{b}})$ is the outward unit normal vector at $x_{\mathbf{b}} = x-t_{\mathbf{b}} v \in \partial\Omega$. \\
\end{lemma}
\begin{remark}
First-order derivatives of the characteristics $(X,V)$ for general 3D convex domains were obtained in \cite{GKTT2017}. Lemma \ref{est der X,V} is a simplified version in the 2D disk, and its singular orders coincide with the results of \cite{GKTT2017}. \\
\end{remark}
\begin{proof}
By \eqref{n_x,v} in Lemma \ref{X,V}, we have
\begin{align*}
\nabla_x X(0;t,x,v) &= Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} \right)-\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta -\pi} \left(n(x_{\mathbf{b}}) \otimes \nabla_x \theta \right)\\
&\quad +t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) - \frac{1}{\vert v \vert \sin \frac{\theta}{2}}Q_\theta^l \left(v \otimes n(x_{\mathbf{b}})\right).
\end{align*}
We define a matrix norm by
\begin{equation*}
\vert A \vert = \max _{i,j} \vert a_{i,j} \vert,
\end{equation*}
where $a_{i,j}$ is the $(i,j)$ component of the matrix $A$. Then, we can easily check that
\begin{equation*}
\vert a \otimes b \vert \leq \vert a \vert \vert b \vert,
\end{equation*}
for any $a,b \in \mathbb{R}^n$; indeed, writing $\vert a \vert = \max_i \vert a_i \vert$ for vectors as well, $(a\otimes b)_{i,j}=a_ib_j$ gives $\max_{i,j}\vert a_ib_j\vert = \max_i\vert a_i\vert \max_j \vert b_j \vert$. To find an upper bound of $\nabla_x X(0;t,x,v)$, we only need to consider $\nabla_x \theta$ and $t^l \times l$. By \eqref{d_theta}, \eqref{t_ell}, and \eqref{tb esti},
\begin{align} \label{e_1}
\vert \nabla_x \theta \vert =\left \vert \frac{2}{\sin \frac{\theta}{2}} Q_{-\frac{\theta}{2}} n(x_{\mathbf{b}}) \right \vert \leq \frac{2 \vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert}, \quad t^l \times l \leq \frac{2 \sin\frac{\theta}{2}}{\vert v \vert}\times \left(\frac{\vert v \vert}{2\sin \frac{\theta}{2}}t +1 \right) \leq t+\frac{2}{\vert v \vert}.
\end{align}
Using the above inequalities, we derive that
\begin{align*}
\vert \nabla_x X(0;t,x,v) \vert &\leq 1+ \frac{ \vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} + \frac{ \vert v \vert t}{2} \vert \nabla_x \theta \vert + t^l l \vert v \vert \vert \nabla_x \theta \vert + \frac{1}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \vert v \vert \\
&\leq 1+\frac{ \vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert } + \frac{ \vert v \vert^2}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} t + \frac{ 2\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert } \left( t + \frac{2}{\vert v \vert} \right) +\frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert}\\
&\lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} \left( 1+ \vert v \vert t\right).
\end{align*}
Recall the derivative $\nabla_v X(0;t,x,v)$ in Lemma \ref{X,V}:
\begin{align*}
\nabla_v X(0;t,x,v)&=-t_{\mathbf{b}} Q_{\theta}^{l-1}\left( I - \frac{v \otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} \right)-\frac{\vert v \vert(t-t_{\mathbf{b}}-t^l)}{2}Q_{(l-\frac{1}{2})\theta - \pi}\left(n(x_{\mathbf{b}}) \otimes \nabla_v \theta\right) \\
&\quad -t^l Q_\theta ^l+t^l l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_v \theta \right)+\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} Q_{\theta}^l \left(v \otimes n(x_{\mathbf{b}})\right)- \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert^3} Q_\theta^l \left(v \otimes v\right).
\end{align*}
Similarly, to estimate $\nabla_v X(0;t,x,v)$, we need to estimate $\nabla_v \theta$. From \eqref{d_theta} and \eqref{tb esti}, we directly compute
\begin{align} \label{e_2}
\vert \nabla_v \theta \vert = 2 \left \vert \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}}- \frac{1}{\vert v \vert} \right) Q_{-\frac{\theta}{2}}n(x_{\mathbf{b}}) \right \vert \leq \frac{6}{\vert v \vert}.
\end{align}
Thus,
\begin{align*}
\vert \nabla_v X(0;t,x,v) \vert &\leq t_{\mathbf{b}} \left( 1+ \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \right) +\frac{\vert v \vert t}{2} \vert \nabla_v \theta \vert + t^l + \left( t^l \times l \right) \vert v \vert \vert \nabla_v \theta \vert + \frac{ t_{\mathbf{b}}}{ \vert v \vert \sin \frac{\theta}{2}} \vert v \vert + \frac{2(l-1) \sin\frac{\theta}{2}}{\vert v \vert^3} \vert v \vert^2 \\
&\leq \frac{2\sin \frac{\theta}{2}}{\vert v \vert} \left( 1+ \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert} \right)+\frac{\vert v \vert t}{2} \times \frac{6}{\vert v \vert}+\frac{2\sin \frac{\theta}{2}}{\vert v \vert} + 6\left(t +\frac{2}{\vert v \vert} \right)\\
&\quad + \frac{2\sin \frac{\theta}{2}}{\vert v \vert} \frac{1}{\vert v \vert \sin \frac{\theta}{2}} \vert v \vert + \frac{(t-t_{\mathbf{b}}-t^l)}{\vert v \vert^2} \vert v \vert^2\\
&\lesssim \frac{1}{\vert v \vert} \left (1+ \vert v \vert t \right),
\end{align*}
where we used \eqref{tb esti} and \eqref{e_1}. For $\nabla_{x,v} V(0;t,x,v)$, using \eqref{n_x,v}, \eqref{t_ell}, \eqref{e_1}, and \eqref{e_2} gives
\begin{align*}
\vert \nabla_x V(0;t,x,v) \vert &= \left \vert -l Q_{l\theta-\frac{\pi}{2}} \left( v \otimes \nabla_x \theta \right) \right \vert \leq \left(\frac{\vert v \vert}{\vert 2\sin \frac{\theta}{2}\vert} t +1\right) \vert v \vert \vert \nabla_x \theta \vert \lesssim \frac{ \vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2}(1+ \vert v \vert t), \\
\vert \nabla_v V(0;t,x,v) \vert&= \left \vert Q_{\theta}^l - lQ_{l\theta -\frac{\pi}{2}} (v \otimes \nabla_v \theta) \right \vert \leq 1+ \left(\frac{\vert v \vert}{\vert 2 \sin \frac{\theta}{2}\vert} t +1\right) \vert v \vert \vert \nabla_v \theta \vert\lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} (1+ \vert v \vert t).
\end{align*}
\end{proof}
\subsection{Second-order estimates of characteristics}
\begin{lemma}
Let $n(x_{\mathbf{b}})$ be the outward unit normal vector at $x_{\mathbf{b}}\in \partial \Omega$. For $(x_{\mathbf{b}},v) \notin \gamma_0$, it follows that
\begin{equation} \label{est der n}
\vert \nabla_x [n(x_{\mathbf{b}})] \vert \lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert}, \quad \vert \nabla_v [n(x_{\mathbf{b}})] \vert \lesssim \frac{1}{\vert v \vert}.
\end{equation}
\end{lemma}
\begin{proof}
We denote the components of $v$ and $n(x_{\mathbf{b}})$ by $(v_1,v_2)$ and $(n_1,n_2)$. By \eqref{normal} in Lemma \ref{d_n} and \eqref{tb esti}, we have
\begin{align*}
\nabla_x[n(x_{\mathbf{b}})] &= I- \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})} = \frac{1}{v\cdot n(x_{\mathbf{b}})}\begin{bmatrix}
v_2n_2 & -v_1n_2 \\
-v_2n_1 & v_1n_1
\end{bmatrix},\\
\nabla_v [n(x_{\mathbf{b}})]&=-t_{\mathbf{b}}\left(I- \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}\right)=\frac{-t_{\mathbf{b}}}{v\cdot n(x_{\mathbf{b}})} \begin{bmatrix}
v_2n_2 & -v_1n_2 \\
-v_2n_1 & v_1n_1
\end{bmatrix},
\end{align*}
which is further bounded by
\begin{equation*}
\vert \nabla_x [n(x_{\mathbf{b}})] \vert \lesssim \frac{\vert v \vert }{\vert v \cdot n(x_{\mathbf{b}})\vert}, \quad \vert \nabla_v [n(x_{\mathbf{b}})] \vert \lesssim \frac{1}{\vert v \vert}.
\end{equation*}
\end{proof}
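As a consistency check for \eqref{normal} in the unit disk, where $n(x_{\mathbf{b}})=x_{\mathbf{b}}$, one may also compute directly from $x_{\mathbf{b}}=x-t_{\mathbf{b}}v$ and $\nabla_x t_{\mathbf{b}}=\frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}$, $\nabla_v t_{\mathbf{b}}=-t_{\mathbf{b}}\frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}$ (Lemma \ref{nabla xv b}):
\begin{align*}
\nabla_x [n(x_{\mathbf{b}})] &= \nabla_x x_{\mathbf{b}} = I - v\otimes \nabla_x t_{\mathbf{b}} = I - \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}, \\
\nabla_v [n(x_{\mathbf{b}})] &= \nabla_v x_{\mathbf{b}} = -t_{\mathbf{b}} I - v\otimes \nabla_v t_{\mathbf{b}} = -t_{\mathbf{b}}\left(I - \frac{v\otimes n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}\right),
\end{align*}
which is exactly the expression used in the proof above.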
\begin{lemma}
The exit backward time $t_{\mathbf{b}}$ and the $l$-th bouncing backward time $t^l$ are defined in Definition \ref{notation}. Then, we have the following estimates
\begin{equation} \label{est der t_ell}
\begin{split}
&\vert \nabla_x t^1 \vert \lesssim \frac{1}{\vert v \vert \vert \sin\frac{\theta}{2} \vert}, \quad \vert \nabla_v t^1 \vert \lesssim \frac{1}{\vert v \vert^2},\\
&\vert \nabla_x t^l \vert \lesssim \frac{1}{\vert v \vert \sin^2\frac{\theta}{2}}(1+\vert v\vert t), \quad \vert \nabla_v t^l \vert \lesssim \frac{1}{\vert v \vert^2 \vert \sin \frac{\theta}{2} \vert }(1+\vert v \vert t),
\end{split}
\end{equation}
whenever $v\cdot n(x_{\mathbf{b}}) \neq 0$.
\end{lemma}
\begin{proof}
Since $t^1 = t-t_{\mathbf{b}}$, it follows from Lemma \ref{nabla xv b} that
\begin{align*}
\nabla_x t^1 = -\nabla_x t_{\mathbf{b}} = -\frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}, \quad \nabla_v t^1 = -\nabla_v t_{\mathbf{b}} = t_{\mathbf{b}} \frac{n(x_{\mathbf{b}})}{v\cdot n(x_{\mathbf{b}})}.
\end{align*}
Using the above and \eqref{tb esti} implies that
\begin{align*}
\vert \nabla_x t^1 \vert \lesssim \frac{1}{\vert v \vert \left \vert \sin \frac{\theta}{2}\right \vert}, \quad \vert \nabla_v t^1 \vert \lesssim \frac{1}{\vert v \vert^2}.
\end{align*}
By \eqref{nabla x,v t_ell} in the proof of Lemma \ref{X,V}, we have
\begin{align*}
\nabla_x t^l &= \frac{1}{\vert v \vert \sin \frac{\theta}{2}} n(x_{\mathbf{b}}) - \frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} \nabla_x \theta, \\
\nabla_v t^l &= -\frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} n(x_{\mathbf{b}}) +\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3} v -\frac{(l-1)\cos \frac{\theta}{2}}{\vert v \vert} \nabla_v \theta.
\end{align*}
By \eqref{t_ell} in the proof of Lemma \ref{X,V}, the bouncing number $l$ can be bounded by
\begin{equation} \label{ell est}
l= 1+ \frac{\vert v \vert}{2\sin \frac{\theta}{2}} (t-t_{\mathbf{b}}-t^l) \leq 1+ \frac{\vert v \vert }{2\left \vert \sin \frac{\theta}{2}\right \vert} t \lesssim \frac{1}{\left \vert \sin\frac{\theta}{2}\right \vert} (1+\vert v \vert t).
\end{equation}
Then, from \eqref{tb esti}, \eqref{e_1}, \eqref{e_2}, and \eqref{ell est}, one obtains that
\begin{align*}
\vert \nabla_x t^l \vert &\lesssim \frac{1}{\vert v \vert \vert \sin \frac{\theta}{2}\vert} + \frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}}(1+\vert v \vert t)\lesssim \frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}} (1+\vert v \vert t), \\
\vert\nabla_v t^l \vert &\lesssim \frac{1}{\vert v \vert^2}+\frac{1}{\vert v \vert^2}(1+\vert v \vert t) + \frac{1}{\vert v \vert^2 \vert \sin \frac{\theta}{2}\vert}(1+\vert v \vert t) \lesssim \frac{1}{\vert v \vert^2 \vert \sin \frac{\theta}{2}\vert} (1+\vert v \vert t).
\end{align*}
\end{proof}
\begin{lemma} \label{2nd est der X,V}
The characteristics $X(0;t,x,v)$ and $V(0;t,x,v)$ are defined in Definition \ref{notation}. Under the same assumption as in Lemma \ref{X,V}, we have the following estimates for the second derivatives of the characteristics:
\begin{equation*}
\begin{split}
&\vert \nabla_{xx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^4}{\vert v \cdot n(x_{\mathbf{b}})\vert^4}(1+\vert v \vert^2 t^2), \quad \vert \nabla_{vx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3}(1+\vert v \vert^2 t^2), \\
&\vert \nabla_{xv} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3}(1+\vert v \vert^2 t^2), \quad \vert \nabla_{vv}X(0;t,x,v) \vert \lesssim \frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2}(1+\vert v \vert^2 t^2),\\
&\vert \nabla_{xx} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^5}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2t^2), \quad \vert \nabla_{vx} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}})\vert^3}(1+\vert v \vert^2 t^2),\\
&\vert \nabla_{xv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}})\vert^3}(1+\vert v \vert^2 t^2), \quad \vert \nabla_{vv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}})\vert^2} (1+\vert v \vert^2 t^2),
\end{split}
\end{equation*}
where $\vert \nabla_{xx,xv,vv} X(0;t,x,v)\vert$ (and likewise for $V(0;t,x,v)$) stands for $\sup_{i,j} \vert \nabla_{ij}X(0;t,x,v)\vert$ (respectively $\sup_{i,j}\vert \nabla_{ij}V(0;t,x,v)\vert$) with $i,j \in \{x_1,x_2,v_1,v_2\}$.
\end{lemma}
\begin{proof}
We denote the components of $v$ and $n(x_{\mathbf{b}})$ by $(v_1,v_2)$ and $(n_1,n_2)$. To estimate $\vert \nabla_{xx} X(0;t,x,v)\vert$, we need to determine which component of the matrix $\nabla_x X(0;t,x,v)$ has the highest order of singularity $\frac{1}{\sin \frac{\theta}{2}}$ and of travel length $(1+\vert v \vert t)$ when we take the derivative with respect to $x$. In the estimates \eqref{e_1}, \eqref{est der n}, \eqref{est der t_ell}, and \eqref{ell est}, we have already checked the singularity and travel length orders of the relevant terms. Considering these estimates, the highest singularity and travel length orders occur in the $x$-derivative of the $(1,1)$ component of the matrix $\nabla_x X(0;t,x,v)$. Hence, we only consider the $(1,1)$ component of the matrix $\nabla_x X(0;t,x,v)$. In fact, from Lemma \ref{X,V}, the $(1,1)$ component $[\nabla_x X(0;t,x,v)]_{(1,1)}$ of the matrix $\nabla_x X(0;t,x,v)$ is
\begin{align*}
&[\nabla_x X(0;t,x,v)]_{(1,1)}\\
&= \cos((l-1)\theta) \frac{v_2n_2}{v\cdot n(x_{\mathbf{b}})} + \sin((l-1)\theta) \frac{v_2 n_1}{v\cdot n(x_{\mathbf{b}})}\\
&\quad +\frac{\vert v \vert(t^1-t^l)}{\sin \frac{\theta}{2}}\left(-n_1^2 \cos ((l-\frac{1}{2})\theta) \cos \frac{\theta}{2} -n_1n_2 \cos ((l-\frac{1}{2})\theta) \sin \frac{\theta}{2}+n_1n_2 \sin((l-\frac{1}{2})\theta)\cos \frac{\theta}{2} \right. \\
&\left. \qquad \qquad \qquad \qquad +n_2^2 \sin ((l-\frac{1}{2})\theta) \sin \frac{\theta}{2} \right)\\
&\quad-\frac{2t^l l}{\sin \frac{\theta}{2}} \left( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\
&\quad -\frac{1}{\vert v \vert \sin \frac{\theta}{2}}\left( v_1 n_1 \cos l\theta -v_2n_1 \sin l\theta\right)\\
&\lesssim \frac{\vert v \vert}{\vert v \cdot n(x_{\mathbf{b}}) \vert} + \frac{1}{\vert \sin \frac{\theta}{2} \vert} (1+\vert v \vert t) +\frac{1}{\vert \sin \frac{\theta}{2}\vert} \lesssim \frac{\vert v \vert }{\vert v \cdot n(x_{\mathbf{b}}) \vert}(1+\vert v \vert t) ,
\end{align*}
where the first inequality comes from \eqref{tb esti}, \eqref{ell est}, and
\begin{equation*}
t^1-t^l= \frac{2(l-1)\sin\frac{\theta}{2}}{\vert v \vert} \lesssim \frac{1}{\vert v \vert}(1+\vert v \vert t).
\end{equation*}
Similarly, the $(1,1)$ components of the matrices $\nabla_v X(0;t,x,v)$, $\nabla_x V(0;t,x,v)$, and $\nabla_v V(0;t,x,v)$ satisfy the inequalities in Lemma \ref{est der X,V}. As in the estimate of $\vert \nabla_{xx} X(0;t,x,v)\vert$, we only consider the $(1,1)$ components of the derivative matrices of $X(0;t,x,v)$ and $V(0;t,x,v)$ to get the estimates. When we differentiate $[\nabla_x X(0;t,x,v)]_{(1,1)}$ with respect to $x$, the terms containing $\frac{t^l l}{\sin \frac{\theta}{2}}$ are the main terms that increase the singularity $\frac{1}{\sin \frac{\theta}{2}}$ and travel length $(1+\vert v \vert t)$ orders. The quantity $\frac{t^l l}{\sin \frac{\theta}{2}}$ has singularity order 1 and travel length order 1 because
\begin{equation*}
\left \vert \frac{t^l l}{\sin \frac{\theta}{2}}\right \vert \lesssim \frac{1}{\vert \sin \frac{\theta}{2}\vert } \times \frac{\vert \sin \frac{\theta}{2}\vert }{\vert v \vert} \times \frac{1}{\vert \sin \frac{\theta}{2}\vert}(1+\vert v \vert t)=\frac{1}{\vert v \cdot n(x_{\mathbf{b}})\vert}(1+\vert v \vert t),
\end{equation*}
where we have used \eqref{tb esti} and \eqref{ell est}. On the other hand, if we take the derivative of the term $\frac{t^l l}{\sin \frac{\theta}{2}}$ with respect to $x$, the singularity and travel length orders become $4$ and $2$, respectively:
\begin{equation}gin{align*}
\left \vert \nablala_x \left(\frac{t^l l}{\sin \frac{\theta}{2}}\right)\right \vert =\left \vert \frac{l}{\sin \frac{\theta}{2}} \nablala_x t^l -\frac{t^l l\cos\frac{\theta}{2}}{2\sin^2 \frac{\theta}{2}} \nablala_x \theta\right \vert &\lesssim \frac{1}{\vert v \vert \sin^4 \frac{\theta}{2}}(1+\vert v \vert^2t^2) + \frac{1}{\vert v \vert \vert \sin^3 \frac{\theta}{2}\vert}(1+\vert v \vert t)\\
&\lesssim \frac{\vert v \vert^3}{\vert v\cdotot n(x_{\mathbf{b}})\vert^4} (1+\vert v \vert ^2 t^2),
\end{align*}
where \eqref{tb esti}, \eqref{e_1}, \eqref{est der t_ell}, and \eqref{ell est} have been used. Hence, it suffices to estimate the following terms in $[\nabla_x X(0;t,x,v)]_{(1,1)}$,
\begin{equation*}
-\frac{2t^l l}{\sin \frac{\theta}{2}} \left( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right):=I_{1},
\end{equation*}
to obtain the estimate for $\vert \nabla_{xx} X(0;t,x,v) \vert$. Taking the $x$-derivative of the above terms, one obtains
\begin{align*}
\nabla_x I_{1} &= \left ( \frac{-2l\nabla_x t^l}{\sin \frac{\theta}{2}} +\frac{2t^l l\cos\frac{\theta}{2}\nabla_x \theta}{2\sin^2 \frac{\theta}{2}} \right )\big( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\big)\\
&\quad -\frac{2t^l l}{\sin \frac{\theta}{2}} \left( v_1 \sin l \theta \cos \frac{\theta}{2}\nabla_x n_1 +lv_1 n_1 \cos l\theta \cos \frac{\theta}{2}\nabla_x \theta -\frac{1}{2} v_1 n_1 \sin l\theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\
&\qquad \qquad \quad \left. + v_1 \sin l \theta \sin \frac{\theta}{2}\nabla_x n_2 +lv_1 n_2 \cos l\theta \sin \frac{\theta}{2}\nabla_x \theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_x \theta \right. \\
&\qquad \qquad \quad \left. + v_2 \cos l\theta \cos \frac{\theta}{2}\nabla_x n_1 -lv_2 n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2n_1\cos l\theta \sin \frac{\theta}{2}\nabla_x \theta \right. \\
&\qquad \qquad \quad \left. + v_2 \cos l\theta \sin \frac{\theta}{2}\nabla_x n_2 -lv_2 n_2 \sin l\theta \sin \frac{\theta}{2} \nabla_x \theta + \frac{1}{2} v_2 n_2 \cos l \theta \cos \frac{\theta}{2}\nabla_x \theta \right).
\end{align*}
Using \eqref{tb esti},\eqref{e_1},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est}, one can further bound the above as
\begin{align*}
\vert \nabla_x I_{1}\vert &\lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}})\vert^4} (1+\vert v \vert^2t^2) \times \vert v \vert + \frac{1}{\vert v \cdot n(x_{\mathbf{b}})\vert}(1+\vert v \vert t) \times \left( \vert v \vert \vert \nabla_x n(x_{\mathbf{b}}) \vert + l\vert v \vert\vert \nabla_x \theta \vert\right )\\
&\lesssim \frac{\vert v \vert^4}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2 t^2).
\end{align*}
Therefore, we get
\begin{align*}
\vert \nabla_{xx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^4}{\vert v\cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2t^2).
\end{align*}
For the estimate of $\vert \nabla_{vx} X(0;t,x,v)\vert$, similarly to the case of $\vert \nabla_{xx} X(0;t,x,v) \vert$, we only consider the terms $I_1$. Taking the $v$-derivative of $I_1$, we obtain
\begin{align*}
\nabla_v I_1&=\left ( \frac{-2l \nabla_v t^l}{\sin \frac{\theta}{2}} +\frac{2t^l l\cos\frac{\theta}{2}\nabla_v \theta}{2\sin^2 \frac{\theta}{2}} \right )\big( v_1n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\big)\\
&\quad -\frac{2t^l l}{\sin \frac{\theta}{2}} \left( n_1 \sin l \theta \cos \frac{\theta}{2}\nabla_v v_1 + v_1 \sin l \theta \cos \frac{\theta}{2}\nabla_v n_1 +lv_1 n_1 \cos l\theta \cos \frac{\theta}{2}\nabla_v \theta -\frac{1}{2} v_1 n_1 \sin l\theta \sin \frac{\theta}{2} \nabla_v \theta \right. \\
&\qquad \qquad \quad \left. + n_2\sin l\theta \sin \frac{\theta}{2} \nabla_v v_1+ v_1\sin l \theta \sin \frac{\theta}{2} \nabla_v n_2 +lv_1 n_2 \cos l\theta \sin \frac{\theta}{2}\nabla_v \theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_v \theta \right. \\
&\qquad \qquad \quad \left. + n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_v v_2+v_2\cos l\theta \cos \frac{\theta}{2} \nabla_v n_1 -lv_2 n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_v \theta -\frac{1}{2} v_2n_1\cos l\theta \sin \frac{\theta}{2}\nabla_v \theta \right. \\
&\qquad \qquad \quad \left. +n_2\cos l \theta \sin \frac{\theta}{2} \nabla_v v_2+ v_2\cos l\theta \sin \frac{\theta}{2} \nabla_v n_2 -lv_2 n_2 \sin l\theta \sin \frac{\theta}{2} \nabla_v \theta + \frac{1}{2} v_2 n_2 \cos l \theta \cos \frac{\theta}{2}\nabla_v \theta \right).
\end{align*}
Using \eqref{tb esti},\eqref{e_2},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est} yields that
\begin{align*}
\vert \nabla_v I_1 \vert &\lesssim \left( \frac{1}{\vert v \vert^2 \vert \sin^3 \frac{\theta}{2}\vert} (1+\vert v \vert^2 t^2)+\frac{1}{\vert v \vert^2 \vert \sin ^2 \frac{\theta}{2}\vert}(1+\vert v \vert t) \right)\times \vert v \vert+\frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert} ( 1+ \vert v \vert \vert \nabla_v n(x_{\mathbf{b}}) \vert +l\vert v \vert \vert \nabla_v \theta\vert)\\
&\lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert ^2 t^2).
\end{align*}
Hence, one obtains that
\begin{align*}
\vert \nabla_{vx} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert ^2 t^2).
\end{align*}
By Lemma \ref{X,V}, we write the $(1,1)$ component of $\nabla_v X(0;t,x,v)$:
\begin{align*}
&[\nabla_v X(0;t,x,v)]_{(1,1)}\\
&=-t_{\mathbf{b}} \left(\cos (l-1)\theta \frac{v_2n_2}{v\cdot n(x_{\mathbf{b}})} +\sin (l-1)\theta \frac{v_2n_1}{v \cdot n(x_{\mathbf{b}})} \right) -t^l \cos l\theta \\
&\quad +2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right) \\
&\quad + \frac{t_{\mathbf{b}}}{\vert v \vert \sin \frac{\theta}{2}} (v_1n_1\cos l\theta -v_2 n_1 \sin l \theta) -\frac{2(l-1)\sin \frac{\theta}{2}}{\vert v \vert^3}(v_1^2\cos l\theta - v_1v_2 \sin l \theta) \\
&\quad -\vert v \vert (t^1-t^l) \left(\frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}}-\frac{1}{\vert v \vert}\right) \left(-n_1^2 \cos (l-\frac{1}{2})\theta \cos \frac{\theta}{2} +n_1n_2 \sin (l-1)\theta +n_2^2 \sin (l-\frac{1}{2})\theta \sin \frac{\theta}{2}\right).
\end{align*}
Similarly to $\nabla_x X(0;t,x,v)$, the main terms in $\nabla_v X(0;t,x,v)$ are
\begin{align*}
2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right):=I_2.
\end{align*}
When we differentiate $\nabla_v X(0;t,x,v)$ with respect to $x$ and $v$, the term $I_2$ is the main contribution to the increase of the singularity and travel length orders. Thus, we only differentiate $I_2$ to obtain the estimates for $\vert \nabla_{xv} X(0;t,x,v) \vert $ and $\vert \nabla_{vv} X(0;t,x,v)\vert$. First, taking the $x$-derivative of $I_2$ gives
\begin{align*}
\nabla_x I_2 &= 2l \nabla_x t^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\
&\quad + 2lt^l \left( \frac{\nabla_x t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{t_{\mathbf{b}}\cos \frac{\theta}{2}\nabla_x \theta}{2\sin^2 \frac{\theta}{2}} \right)\big( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\big)\\
&\quad +2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left( v_1 \sin l\theta \cos \frac{\theta}{2} \nabla_xn_1+lv_1 n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2}v_1 n_1 \sin l \theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\
&\qquad \qquad \qquad \qquad \qquad \quad +\left. v_1\sin l \theta \sin \frac{\theta}{2} \nabla_x n_2+lv_1n_2\cos l \theta \sin \frac{\theta}{2} \nabla_x \theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_x \theta \right.\\
&\qquad \qquad \qquad \qquad \qquad \quad +\left. v_2\cos l\theta \cos \frac{\theta}{2}\nabla_x n_1 -lv_2n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2 n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\
&\qquad \qquad \qquad \qquad \qquad \quad +\left. v_2 \cos l\theta \sin \frac{\theta}{2} \nabla_x n_2 -lv_2n_2 \sin l \theta \sin \frac{\theta}{2} \nabla_x \theta +\frac{1}{2} v_2n_2 \cos l\theta \cos \frac{\theta}{2} \nabla_x \theta \right).
\end{align*}
Hence, it follows from \eqref{tb esti},\eqref{e_1},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est} that
\begin{align*}
\vert \nabla_x I_2 \vert &\lesssim \frac{1}{\vert v \vert \vert \sin ^3 \frac{\theta}{2} \vert} (1+\vert v \vert^2 t^2) \times \frac{1}{\vert v \vert} \times \vert v \vert + \frac{1}{\vert v\vert}(1+\vert v \vert t) \times\frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}} \times \vert v \vert \\
&\quad + \frac{1}{\vert v \vert}(1+\vert v \vert t)\times \frac{1}{\vert v \vert} \times \frac{\vert v \vert}{\sin ^2\frac{\theta}{2}} (1+\vert v \vert t)\\
&\lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2 t^2),
\end{align*}
which yields the estimate for $\vert \nabla_{xv} X(0;t,x,v) \vert$:
\begin{align*}
\vert \nabla_{xv} X(0;t,x,v) \vert \lesssim \frac{\vert v \vert^2}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2 t^2).
\end{align*}
Similarly, we consider $\nabla_v I_2$:
\begin{align*}
\nabla_v I_2 &= 2l \nabla_v t^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\
&\quad + 2lt^l \left( \frac{\nabla_v t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{t_{\mathbf{b}}\cos \frac{\theta}{2}}{2\sin^2 \frac{\theta}{2}} \nabla_v \theta+\frac{v}{\vert v \vert^3}\right)\left( v_1 n_1 \sin l\theta \cos \frac{\theta}{2} +v_1 n_2 \sin l\theta \sin \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2} \right.\\
&\left. \qquad \hspace{5.7cm} +v_2 n_2 \cos l\theta \sin \frac{\theta}{2}\right)\\
&\quad +2lt^l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left( n_1 \sin l\theta \cos \frac{\theta}{2} \nabla_v v_1+v_1 \sin l\theta \cos \frac{\theta}{2} \nabla_vn_1+lv_1 n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_v \theta \right.\\
&\qquad \qquad \qquad \qquad \qquad \quad\left. -\frac{1}{2}v_1 n_1 \sin l \theta \sin \frac{\theta}{2} \nabla_v \theta
+n_2 \sin l \theta \sin \frac{\theta}{2} \nabla_v v_1 +v_1\sin l \theta \sin \frac{\theta}{2} \nabla_v n_2 \right. \\
&\qquad \qquad \qquad \qquad \qquad \quad\left. +lv_1n_2\cos l \theta \sin \frac{\theta}{2} \nabla_v\theta +\frac{1}{2} v_1 n_2 \sin l\theta \cos \frac{\theta}{2} \nabla_v \theta +n_1\cos l\theta \cos \frac{\theta}{2} \nabla_v v_2 \right.\\
&\qquad \qquad \qquad \qquad \qquad \quad +\left. v_2\cos l\theta \cos \frac{\theta}{2}\nabla_v n_1 -lv_2n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_v \theta -\frac{1}{2} v_2 n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_v \theta \right. \\
&\qquad \qquad \qquad \qquad \qquad \quad +\left. n_2 \cos l\theta \sin \frac{\theta}{2} \nabla_v v_2+v_2 \cos l\theta \sin \frac{\theta}{2} \nabla_v n_2 -lv_2n_2 \sin l \theta \sin \frac{\theta}{2} \nabla_v \theta \right.\\
&\qquad \qquad \qquad \qquad \qquad \quad \left.+\frac{1}{2} v_2n_2 \cos l\theta \cos \frac{\theta}{2} \nabla_v \theta \right).
\end{align*}
By \eqref{tb esti},\eqref{e_2},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est}, the above can be further bounded by
\begin{align*}
\vert \nabla_v I_2 \vert &\lesssim \frac{1}{\vert v \vert^2 \sin^2 \frac{\theta}{2}} (1+\vert v \vert^2 t^2) \times \frac{1}{\vert v \vert} \times \vert v \vert + \frac{1}{\vert v \vert}(1+\vert v \vert t)\times \frac{1}{\vert v \vert^2 \vert \sin \frac{\theta}{2}\vert} \times \vert v \vert +\frac{1}{\vert v \vert}(1+\vert v \vert t)\times \frac{1}{\vert v \vert \vert \sin \frac{\theta}{2} \vert} (1+\vert v \vert t)\\
&\lesssim \frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2} (1+\vert v \vert^2 t^2) .
\end{align*}
Hence, $\vert \nabla_{vv} X(0;t,x,v)\vert$ is bounded by
\begin{align*}
\vert \nabla_{vv} X(0;t,x,v) \vert \lesssim \frac{1}{\vert v \cdot n(x_{\mathbf{b}}) \vert^2} (1+\vert v \vert^2 t^2).
\end{align*}
To obtain the estimates for $\vert \nabla_{xx} V(0;t,x,v)\vert$ and $\vert \nabla_{vx} V(0;t,x,v)\vert$, we now consider $[\nabla_{x} V(0;t,x,v)]_{(1,1)}$:
\begin{align*}
[\nabla_x V(0;t,x,v)]_{(1,1)} = \frac{2l}{\sin \frac{\theta}{2}} \left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_1n_2 \sin l \theta \sin \frac{\theta}{2} +v_2n_1 \cos l\theta \cos \frac{\theta}{2} + v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right),
\end{align*}
by Lemma \ref{X,V}. In $[\nabla_x V(0;t,x,v)]_{(1,1)}$, the main terms are
\begin{align*}
\frac{2l}{\sin \frac{\theta}{2}} v_1n_1 \sin l \theta \cos \frac{\theta}{2} \quad \textrm{and} \quad \frac{2l}{\sin \frac{\theta}{2}} v_2 n_1 \cos l\theta \cos \frac{\theta}{2},
\end{align*}
because these terms have the highest singularity order in $[\nabla_x V(0;t,x,v)]_{(1,1)}$. Thus, for $\vert \nabla_{xx}V(0;t,x,v) \vert$, we now take the $x$-derivative of the main terms:
\begin{align*}
&\nabla_x \left(\frac{2l}{\sin \frac{\theta}{2}}\left(v_1n_1\sin l \theta \cos \frac{\theta}{2} + v_2n_1 \cos l \theta \cos \frac{\theta}{2}\right)\right)\\
&= -\frac{l\cos \frac{\theta}{2}}{\sin ^2 \frac{\theta}{2}}\nabla_x \theta \left(v_1n_1\sin l \theta \cos \frac{\theta}{2} + v_2n_1 \cos l \theta \cos \frac{\theta}{2}\right)\\
&\quad + \frac{2l}{\sin \frac{\theta}{2}} \left(v_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x n_1 +lv_1 n_1 \cos l\theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_1 n_1 \sin l\theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\
&\qquad \qquad \quad + \left. v_2\cos l \theta \cos \frac{\theta}{2} \nabla_x n_1 -lv_2 n_1 \sin l\theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_x \theta \right)\\
&:=I_3.
\end{align*}
By \eqref{e_1},\eqref{est der n}, and \eqref{ell est}, $I_3$ can be further bounded by
\begin{align*}
\vert I_3 \vert \lesssim \frac{\vert v \vert}{\sin^4 \frac{\theta}{2}}(1+\vert v \vert t) + \frac{\vert v \vert }{ \sin^4\frac{\theta}{2}}(1+\vert v \vert^2 t^2)\lesssim \frac{\vert v \vert^5}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4}(1+\vert v \vert^2 t^2),
\end{align*}
which implies that
\begin{align*}
\vert \nabla_{xx} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^5}{\vert v \cdot n(x_{\mathbf{b}}) \vert^4} (1+\vert v \vert^2t^2).
\end{align*}
Similarly, we first take the $v$-derivative of the main terms in $[\nabla_x V(0;t,x,v)]_{(1,1)}$ and then estimate these $v$-derivatives. We deduce
\begin{align*}
\vert \nabla_{vx} V(0;t,x,v) \vert \lesssim \frac{ \vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+ \vert v \vert^2 t^2),
\end{align*}
where we have used \eqref{e_2}, \eqref{est der n}, and \eqref{ell est}. Lastly, it remains to estimate $\vert \nabla_{xv}V(0;t,x,v)\vert$ and $\vert \nabla_{vv} V(0;t,x,v)\vert$. Let us consider the $(1,1)$ component of $\nabla_v V(0;t,x,v)$:
\begin{align*}
[\nabla_v V(0;t,x,v)]_{(1,1)}&=\cos l\theta -2l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right) \left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_1 n_2 \sin l \theta \sin \frac{\theta}{2} \right.\\
&\left. \qquad \qquad \qquad \qquad \qquad \qquad \quad + v_2 n_1 \cos l\theta \cos \frac{\theta}{2} +v_2n_2 \cos l\theta \sin \frac{\theta}{2}\right),
\end{align*}
by Lemma \ref{X,V}. Similarly to the previous cases, the main terms in $[\nabla_v V(0;t,x,v)]_{(1,1)}$ are
\begin{align*}
-2l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2}\right):=I_4,
\end{align*}
for the same reason. Taking the $x$-derivative of $I_4$, we get
\begin{align*}
\nabla_x I_4 &= -2l \left( \frac{\nabla_x t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{t_{\mathbf{b}} \cos \frac{\theta}{2}}{2\sin^2\frac{\theta}{2}} \nabla_x \theta\right)\left(v_1n_1 \sin l \theta \cos \frac{\theta}{2} +v_2 n_1 \cos l\theta \cos \frac{\theta}{2}\right)\\
&\quad -2l \left( \frac{t_{\mathbf{b}}}{\sin \frac{\theta}{2}} -\frac{1}{\vert v \vert}\right)\left(v_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x n_1 + lv_1n_1 \cos l\theta \cos \frac{\theta}{2}\nabla_x \theta -\frac{1}{2} v_1n_1 \sin l \theta \sin \frac{\theta}{2} \nabla_x \theta \right. \\
& \qquad \qquad \qquad \qquad \qquad +\left. v_2 \cos l \theta \cos \frac{\theta}{2} \nabla_x n_1 -l v_2n_1 \sin l \theta \cos \frac{\theta}{2} \nabla_x \theta -\frac{1}{2} v_2 n_1 \cos l\theta \sin \frac{\theta}{2} \nabla_x \theta\right).
\end{align*}
Using \eqref{tb esti},\eqref{e_1},\eqref{est der n},\eqref{est der t_ell}, and \eqref{ell est}, one obtains that
\begin{align*}
\vert \nabla_x I_4 \vert \lesssim \frac{1}{\vert \sin\frac{\theta}{2}\vert } (1+\vert v \vert t)\times \frac{1}{\vert v \vert \sin^2 \frac{\theta}{2}} \times \vert v \vert +\frac{1}{\vert \sin \frac{\theta}{2} \vert}(1+\vert v \vert t) \times \frac{1}{\vert v \vert} \times \frac{\vert v \vert}{ \sin^2 \frac{\theta}{2}}(1+\vert v \vert t)\lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2t^2).
\end{align*}
Hence, we get the estimate for $\vert\nabla_{xv}V(0;t,x,v)\vert$:
\begin{align*}
\vert \nabla_{xv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert^3}{\vert v \cdot n(x_{\mathbf{b}}) \vert^3} (1+\vert v \vert^2 t^2).
\end{align*}
Similarly, we take the $v$-derivative of the main terms $I_4$ and estimate $\nabla_v I_4$ to obtain $\vert\nabla_{vv} V(0;t,x,v)\vert$. From \eqref{tb esti}, \eqref{e_2}, \eqref{est der n}, \eqref{est der t_ell}, and \eqref{ell est}, we derive
\begin{align*}
\vert \nabla_{vv} V(0;t,x,v) \vert \lesssim \frac{\vert v \vert}{ \vert v \cdot n(x_{\mathbf{b}}) \vert^2} (1+\vert v \vert^2 t^2).
\end{align*}
\end{proof}
| 10,932 | 129,481 |
en
|
train
|
0.4976.46
|
\subsection{Proof of Theorem \ref{thm 3}}
\begin{proof}[Proof of Theorem \ref{thm 3}]
\textit{Step 1.} First, we prove the $C^{1}$ estimate. Note that it is easy to derive
\begin{equation} \label{dt XV}
\partial_{t}X(0;t,x,v) = -v^{k},\quad \partial_{tt}X(0;t,x,v) = 0,\quad \partial_{t}V(0;t,x,v) = 0, \quad \partial_{tt}V(0;t,x,v) = 0,
\end{equation}
where we assumed $t^{k+1} < 0 < t^{k}$ for some integer $k$. For $i\in\{t,x,v\}$,
\begin{equation} \label{chain}
\nablala_{i}f(t,x,v) = \nablala_{x}f_0 \nablala_{i}X(0;t,x,v) + \nablala_{v}f_0 \nablala_{i} V(0;t,x,v).
\end{equation}
Hence, using Lemma \ref{est der X,V} and \eqref{dt XV}, we obtain
\begin{equation*}
\begin{split}
|\partial_{t}f| &\lesssim \|f_0\|_{C^{1}}|v|, \\
|\nabla_{x}f| &\lesssim \|f_0\|_{C^{1}} \frac{|v|^{2}}{|v\cdot n(x_{\mathbf{b}})|^{2}} \langle v \rangle (1 + |v|t), \\
|\nabla_{v}f| &\lesssim \|f_0\|_{C^{1}} \frac{1}{|v\cdot n(x_{\mathbf{b}})|} \langle v \rangle (1 + |v|t), \\
\end{split}
\end{equation*}
where $x_{\mathbf{b}} = x_{\mathbf{b}}(x,v)$ and $\langle v \rangle := 1 + |v|$. So we obtain \eqref{C1 bound}. \\
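We record the short computation behind the first bound, which uses only \eqref{chain} and \eqref{dt XV}, together with the fact that the speed is conserved along the characteristic (so that $|v^{k}| = |v|$):
\begin{equation*}
\partial_{t}f(t,x,v) = \nabla_{x}f_0\, \partial_{t}X(0;t,x,v) + \nabla_{v}f_0\, \partial_{t}V(0;t,x,v) = -\nabla_{x}f_0\, v^{k},
\qquad \text{hence} \qquad
|\partial_{t}f| \lesssim \|f_0\|_{C^{1}}\, |v|.
\end{equation*}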
\textit{Step 2.} Now we compute the second-order estimates. For $\nabla_{xx}f$, from \eqref{chain}, Lemma~\ref{est der X,V}, and Lemma~\ref{2nd est der X,V}, we obtain
\begin{equation*}
\begin{split}
|\nabla_{xx}f| &= |\nabla_{x} \big( \nabla_{x}f_0 \nabla_{x}X(0;t,x,v) + \nabla_{v}f_0 \nabla_{x} V(0;t,x,v) \big)| \\
&\lesssim \|f_0\|_{C^{1}} \big( |\nabla_{xx}X(0) | + |\nabla_{xx}V(0)| \big)
+ \|f_0\|_{C^{2}} \big( |\nabla_{x}X(0)| + |\nabla_{x}V(0)| \big)^{2} \\
&\lesssim \|f_0\|_{C^{2}} \frac{|v|^{4}}{|v\cdot n(x_{\mathbf{b}})|^{4}} \langle v \rangle^{2} (1 + |v|t)^{2},
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
|\nabla_{vx}f| &= |\nabla_{v} \big( \nabla_{x}f_0 \nabla_{x}X(0;t,x,v) + \nabla_{v}f_0 \nabla_{x} V(0;t,x,v) \big)| \\
&\lesssim \|f_0\|_{C^{1}} \big( |\nabla_{vx}X(0) | + |\nabla_{vx}V(0)| \big)
+ \|f_0\|_{C^{2}} \big( |\nabla_{x}X(0)| + |\nabla_{x}V(0)| \big)\big( |\nabla_{v}X(0)| + |\nabla_{v}V(0)| \big) \\
&\lesssim \|f_0\|_{C^{2}} \frac{|v|^{2}}{|v\cdot n(x_{\mathbf{b}})|^{3}} \langle v \rangle^{2} (1 + |v|t)^{2},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
|\nabla_{vv}f| &= |\nabla_{v} \big( \nabla_{x}f_0 \nabla_{v}X(0;t,x,v) + \nabla_{v}f_0 \nabla_{v} V(0;t,x,v) \big)| \\
&\lesssim \|f_0\|_{C^{1}} \big( |\nabla_{vv}X(0) | + |\nabla_{vv}V(0)| \big)
+ \|f_0\|_{C^{2}} \big( |\nabla_{v}X(0)| + |\nabla_{v}V(0)| \big)^{2} \\
&\lesssim \|f_0\|_{C^{2}} \frac{1}{|v\cdot n(x_{\mathbf{b}})|^{2}} \langle v \rangle^{2} (1 + |v|t)^{2},
\end{split}
\end{equation*}
where $|\nabla_{xx, vx, vv}X|$ means $\sup_{i,j,k}|\nabla_{ij}X_{k}(0;t,x,v)|$ for $i,j \in\{ x_{1}, x_{2}, v_{1}, v_{2}\}$ and $k \in \{1,2\}$ (and similarly for $\nabla_{ij}V$). Combining the above three estimates, we obtain \eqref{C2 bound}. Second-derivative estimates which contain at least one $\partial_{t}$ also yield the same upper bound by \eqref{dt XV}. We omit the details. \\
\end{proof}
| 1,644 | 129,481 |
en
|
train
|
0.4976.47
|
\noindent{\bf Acknowledgments.}
The authors thank Haitao Wang for his suggestions and fruitful discussions. Their research is supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2019R1C1C1010915). DL is also supported by the POSCO Science Fellowship of the POSCO TJ Park Foundation. The authors sincerely appreciate the anonymous referees for their valuable comments and suggestions on the paper.
\end{document}
| 2,607 | 129,481 |
en
|
train
|
0.4977.0
|
\begin{document}
\title{Mixed Spatial and Temporal Decompositions
for Large Scale Multistage Stochastic
Optimization Problems}
\begin{abstract}
We consider multistage stochastic optimization problems
involving multiple units. Each unit is a (small) control system.
Static constraints couple units at each stage.
We present a mix of spatial and temporal decompositions
to tackle such large scale problems.
More precisely, we obtain theoretical bounds and policies
by means of two methods, depending on whether the coupling constraints
are handled by prices or by resources.
We study both centralized and decentralized information structures.
We report the results of numerical experiments on the
management of urban microgrids. It appears that decomposition methods are
much faster and give better results than the standard Stochastic Dual
Dynamic Programming method, both in terms of bounds and of policy performance.
\end{abstract}
\section{Introduction}
Multistage stochastic optimization problems are, by essence,
complex because their solutions are indexed both by stages
(time) and by uncertainties (scenarios).
Another layer of complexity can come from spatial structure.
The large scale nature of such problems makes decomposition methods
appealing (we refer to
\citep{ruszczynski1997decomposition,carpentier2017decomposition}
for a broad description of decomposition methods in stochastic optimization problems).
We sketch decomposition methods along three dimensions:
\emph{temporal decomposition} methods like Dynamic Programming
break the multistage problem into a sequence of interconnected static
subproblems \citep{bellman57,bertsekas1995dynamic};
\emph{scenario decomposition} methods split large scale stochastic
optimization problems scenario by scenario, yielding deterministic
subproblems \citep{rockafellar1991scenarios,watson2011progressive,kim2018algorithmic};
\emph{spatial decomposition} methods break possible spatial
couplings in a global problem to obtain local decoupled subproblems
\cite{cohen80}.
These decomposition schemes have been applied in many fields,
and especially in energy management: Dynamic Programming methods have been
used for example in dam management \citep{shapiro2012final},
and scenario decomposition has been successfully applied to
the resolution of unit-commitment problems \citep{bacaud2001bundle},
among others.
Recent developments have mixed spatial decomposition
methods with Dynamic Programming to solve
large scale multistage stochastic optimization problems.
This work led to the introduction of the Dual Approximate
Dynamic Programming (DADP) algorithm, which was first
applied to unit-commitment problems with a single central
coupling constraint linking different stocks
\citep{barty2010decomposition}, and later applied to dams
management problems~\citep{carpentier2018stochastic}.
This article moves
one step further by considering altogether two types
of decompositions (by prices and by resources)
when dealing with general coupling
constraints among units.
General coupling constraints often arise from flow
conservation on a graph,
and our motivation indeed comes from district microgrid
management, where buildings (units) consume,
produce and store energy and are interconnected through
a network.
The paper is organized as follows.
In Sect.~\ref{chap:nodal}, we introduce a generic stochastic
multistage problem with different subsystems linked together
via a set of static coupling constraints.
We present price and resource decomposition schemes, that make use of
so-called admissible coordination processes. We show how to
bound the global Bellman functions above by a sum of local
resource-decomposed value functions, and below by a sum of
local price-decomposed value functions.
In Sect.~\ref{sec:genericdecomposition}, we study the special
case of deterministic coordination processes. First, we show
that the local price and resource decomposed value functions
satisfy recursive Dynamic Programming equations. Second, we
outline how to improve the bounds. Third, we show how to use the decomposed Bellman
functions to devise admissible policies for the global problem.
Finally, we provide an analysis of the decentralized information
structure, that is, when the controls of a given subsystem only
depend on the past observations of the noise in that same subsystem.
In Sect.~\ref{chap:district:numerics}, we present numerical
results for the optimal management of different microgrids
of increasing size and complexity.
We compare the two decomposition algorithms with
(state of the art) Stochastic Dual Dynamic Programming (SDDP) algorithm.
The analysis of case studies consisting of district
microgrids coupling up to 48 buildings together shows
that decomposition methods give better results in terms of cost
performance, and achieve up to a four times speedup in terms of computational
time.
| 1,179 | 28,682 |
en
|
train
|
0.4977.1
|
\section{Upper and Lower Bounds by Spatial Decomposition}
\label{chap:nodal}
We focus in \S\ref{sec:nodal:generic} on a generic decomposable
optimization problem and present price and resource decomposition
schemes. In~\S\ref{sec:nodal:globalpb}, we apply these two
methods to a multistage stochastic optimization problem, by decomposing
a global static coupling constraint by means of so-called price
and resource coordination processes. For such problems,
we define the notions of centralized and decentralized information structures.
\subsection{Bounds for an Optimization Problem under Coupling Constraints
via Decomposition}
\label{sec:nodal:generic}
In~\S\ref{sec:nodal:genericproblem},
we introduce a generic optimization problem with coupled local units.
In~\S\ref{subsec:nodal:bounds}, we show how to bound its optimal value by
decomposition.
\subsubsection{Global Optimization Problem Formulation}
\label{sec:nodal:genericproblem}
Let~$\NODES$ be a finite set, representing local units~\( \node \in \NODES \)
(we use the letter~$\NODES$ as units can be seen as nodes on a graph).
Let~$\sequence{\mathcal{Z}^\node}{\node \in \NODES}$ be a family of sets and
$J^\node: \mathcal{Z}^\node \rightarrow \OpenIntervalClosed{-\infty}{+\infty}$, $\node \in \NODES$,
be local criteria, one for each unit, taking values in the extended reals
\( \OpenIntervalClosed{-\infty}{+\infty} \) ($+\infty$ included to allow for possible constraints).
Let $\sequence{\mathcal{R}^\node}{\node \in \NODES}$, be a family of vector spaces
and $\vartheta^\node: \mathcal{Z}^\node \rightarrow \mathcal{R}^\node$, $\node \in \NODES$,
be mappings that model local constraints.
From these \emph{local} data, we formulate a \emph{global}
minimization problem under constraints. We define the product
set~$\mathcal{Z} = \prod_{\node\in \NODES} \mathcal{Z}^\node $ and the product
space~$\mathcal{R}=\prod_{\node\in \NODES} \mathcal{R}^\node$.
Finally, we introduce a subset $S \subset \mathcal{R}$
that captures the coupling constraints between the $N$~units.
Using the notation \( z=\sequence{z^\node}{\node \in \NODES} \),
we define the \emph{global optimization} problem as
\begin{subequations}
\label{eq:gen:genpb}
\begin{equation}
V^{\sharp} = \inf_{z \in \mathcal{Z}} \;
\sum_{\node \in \NODES} J^\node(z^\node) \eqfinv
\end{equation}
under the \emph{global coupling constraint}
\begin{equation}
\label{eq:gen:coupling}
\ba{\vartheta^\node(z^\node)}_{\node \in \NODES} \in -S \eqfinp
\end{equation}
\end{subequations}
The set~$S$ is called the \emph{primal admissible set},
and an element~$\sequence{r^\node}{\node \in \NODES}\in-S$
is called an \emph{admissible resource vector}.
We note that, without Constraint~\eqref{eq:gen:coupling},
Problem~\eqref{eq:gen:genpb} would decompose into $|\NODES|$ independent
subproblems in a straightforward manner.
We moreover assume that, for $\node \in \NODES$, the space~$\mathcal{R}^\node$
(resources) is paired with a space~$\mathcal{P}^\node$ (prices) by bilinear forms
$\bscal{\cdot}{\cdot}\; : \; \mathcal{P}^\node \times \mathcal{R}^\node \to \RR$
(duality pairings).
We define the product space~$\mathcal{P}= \prod_{\node \in \NODES} \mathcal{P}^\node$,
so that~$\mathcal{R}$ and~$\mathcal{P}$ are paired by the duality pairing
$\bscal{p}{r} = \sum_{\node\in \NODES} \bscal{p^\node}{r^\node}$
(see \cite{rockafellar1974conjugate} for further details; a typical
example of paired spaces is a Hilbert space and itself).
\subsubsection{Upper and Lower Bounds from Price and Resource Value Functions}
\label{subsec:nodal:bounds}
Consider the global optimization problem~\eqref{eq:gen:genpb}.
For each~$\node \in \NODES$, we introduce \emph{local price value
functions} $\underline V^\node : \mathcal{P}^\node \to \ClosedIntervalOpen{-\infty}{+\infty}$
defined by
\begin{equation}
\label{eq:nodal:genericdualvf}
\underline V^\node\nc{p^\node}
= \inf_{z^\node \in \mathcal{Z}^\node} \; J^\node(z^\node) +
\bscal{p^\node}{\vartheta^\node(z^\node)}
\eqfinv
\end{equation}
where we have supposed that \( \underline V^\node\nc{p^\node} < +\infty \),
and \emph{local resource value functions}
$\overline V^\node: \mathcal{R}^\node \to \OpenIntervalClosed{-\infty}{+\infty}$
defined by
\begin{equation}
\label{eq:nodal:genericprimalvf}
\overline V^\node\nc{r^\node}
= \inf_{z^\node\in \mathcal{Z}^\node} \; J^\node(z^\node)
\quad \text{s.t.}\ \quad \vartheta^\node(z^\node) = r^\node
\eqfinv
\end{equation}
where we have supposed that \( \overline V^\node\nc{r^\node} > -\infty \).
We denote by $S^{\star} \subset \mathcal{P}$ the dual cone
associated with the constraint set~$S$:
\begin{equation}
\label{eq:nodal:dualcone}
S^{\star} =
\ba{p \in \mathcal{P} \; |\;
\bscal{p}{r} \geq 0 \eqsepv
\forall r \in S} \eqfinp
\end{equation}
The cone~$S^{\star}$ is called the \emph{dual admissible set},
and an element~$\sequence{p^\node}{\node \in \NODES}\in S^{\star}$
is called an \emph{admissible price vector}.
We now establish lower and upper bounds for Problem~\eqref{eq:gen:genpb},
and show how they can be computed in a decomposed way, that is, unit by unit.
\begin{proposition}
\label{prop:nodal:valuefunctionsbounds}
For any admissible price vector
$p = \sequence{p^\node}{\node \in \NODES} \in S^{\star}$
and for any admissible resource vector
$r =\sequence{r^\node}{\node \in \NODES} \in -S$,
we have the following lower and upper decomposed estimates
of the global minimum $V^{\sharp}$ of Problem~\eqref{eq:gen:genpb}:
\begin{equation}
\label{eq:nodal:generic:bounds}
\sum_{\node \in \NODES} \underline V^\node\nc{p^\node}
\; \leq \; V^{\sharp} \; \leq \;
\sum_{\node \in \NODES} \overline V^\node\nc{r^\node}
\eqfinp
\end{equation}
\end{proposition}
\begin{proof}
Because we have supposed that \( \underline V^\node\nc{p^\node} < +\infty \),
the left hand side of Equation~\eqref{eq:nodal:generic:bounds} belongs
to~$\ClosedIntervalOpen{-\infty}{+\infty}$. In the same way, the right
hand side belongs to~$\OpenIntervalClosed{-\infty}{+\infty}$.
For a given $p = \sequence{p^\node}{\node\in \NODES} \in S^{\star}$,
we have
\begin{align*}
\sum_{\node \in \NODES} \underline V^\node\nc{p^\node}
&= \sum_{\node \in \NODES} \inf_{z^\node \in \mathcal{Z}^\node} \;
J^\node(z^\node) + \bscal{p^\node}{\vartheta^\node(z^\node)}
\eqfinv \\
&= \inf_{z \in \mathcal{Z}} \; {\sum_{\node \in \NODES} J^\node(z^\node)} +
\bscal{p}{\sequence{\vartheta^\node(z^\node)}{\node \in \NODES}}
\eqfinv
\tag{since \( z=\sequence{z^\node}{\node \in \NODES} \)}
\\
&\le \inf_{z \in \mathcal{Z}} \; {\sum_{\node \in \NODES} J^\node(z^\node)} +
\bscal{p}{\sequence{\vartheta^\node(z^\node)}{\node \in \NODES}}
\eqfinv \\
&\hphantom{\le \inf_{z \in \mathcal{Z}}} \text{s.t.}\
{\sequence{\vartheta^\node(z^\node)}{\node \in \NODES}} \in - S
\tag{minimizing on a smaller set} \\
&\le \inf_{z \in \mathcal{Z}} \; {\sum_{\node \in \NODES} J^\node(z^\node) +0}
\tag{as $p\in S^{\star}$ and by definition~\eqref{eq:nodal:dualcone} of~$S^{\star}$}
\\
&\hphantom{\le \inf_{z \in \mathcal{Z}}} \text{s.t.}\
{\sequence{\vartheta^\node(z^\node)}{\node \in \NODES}} \in - S
\eqfinv
\end{align*}
which gives the lower bound inequality.
The upper bound is easily obtained,
as the optimal value $V^{\sharp}$ of Problem~\eqref{eq:gen:genpb} is given by
$\inf_{\tilde{r} \in -S} \sum_{\node \in \NODES} \overline V^\node\nc{\tilde{r}^\node}
\leq \sum_{\node \in \NODES} \overline V^\node\nc{r^\node}$ for any $r \in -S$.
\end{proof}
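To make Proposition~\ref{prop:nodal:valuefunctionsbounds} concrete, the following self-contained script (a toy example of ours, not one of the case studies of Sect.~\ref{chap:district:numerics}) instantiates Problem~\eqref{eq:gen:genpb} with two units, quadratic local criteria $J^\node(z^\node)=\frac{1}{2}(z^\node-c^\node)^2$, coupling maps $\vartheta^\node(z^\node)=z^\node$, and the coupling set $S=\{(r^1,r^2): r^1+r^2=0\}$, whose dual cone $S^{\star}$ consists of the price vectors with equal components. It checks numerically that any admissible price (resp.\ resource) vector yields a lower (resp.\ upper) bound on $V^{\sharp}$.
\begin{verbatim}
import numpy as np

# Toy two-unit instance: J_i(z) = 0.5*(z - c_i)^2, theta_i(z) = z,
# coupling z1 + z2 = 0, i.e. S = {(r1, r2) : r1 + r2 = 0}.
# Dual cone S* = {(p1, p2) : p1 = p2}: admissible prices have equal components.
c = np.array([1.0, 3.0])
grid = np.linspace(-10.0, 10.0, 20001)   # decision grid for brute-force minimization

def J(i, z):
    return 0.5 * (z - c[i]) ** 2

# Global optimal value V_sharp, computed on the coupling set (z2 = -z1).
V_sharp = float(np.min(J(0, grid) + J(1, -grid)))

def V_price(i, p):        # local price value function (coupling dualized)
    return float(np.min(J(i, grid) + p * grid))

def V_resource(i, r):     # local resource value function (coupling pinned to r)
    return float(J(i, r))

# Any admissible price/resource vector gives a lower/upper bound on V_sharp.
for p in [0.0, 1.0, (c[0] + c[1]) / 2]:            # p1 = p2 = p lies in S*
    assert V_price(0, p) + V_price(1, p) <= V_sharp + 1e-9
for r1 in [0.0, -0.5, (c[0] - c[1]) / 2]:          # (r1, -r1) lies in -S
    assert V_sharp <= V_resource(0, r1) + V_resource(1, -r1) + 1e-9

print("V_sharp =", V_sharp)                        # equals (c1 + c2)^2 / 4 = 4 here
print("lower bound at p = 2:", V_price(0, 2.0) + V_price(1, 2.0))
print("upper bound at r = (-1, 1):", V_resource(0, -1.0) + V_resource(1, 1.0))
\end{verbatim}
In this convex toy example both bounds are tight at the best price $p^{\star}=(c^1+c^2)/2$ and at the best resource allocation $r^{\star}=\big((c^1-c^2)/2,\,-(c^1-c^2)/2\big)$, which is the kind of gap reduction sought by the bound-improvement step discussed in Sect.~\ref{sec:genericdecomposition}.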
| 2,699 | 28,682 |
en
|
train
|
0.4977.2
|
\subsection{The Special Case of Multistage Stochastic Optimization Problems
\label{sec:nodal:globalpb}}
Now, we turn to the case where Problem~\eqref{eq:gen:genpb}
corresponds to a multistage stochastic optimization problem
elaborated from local data (local states, local controls, and
local noises), with global coupling constraints at each time step.
We use the notation \( \ic{r,s}=\na{r,r+1,\ldots,s-1,s} \) for two integers
$r \leq s$, and we consider a time span \( \ic{0,T} \) where $T
\in \NN^{\star}$ is a finite horizon.
\subsubsection{Local Data for Local Stochastic Control Problems}
\label{subsec:nodal:generic:localdata}
We detail the \emph{local} data describing
each unit. Let
$\ba{\XX_t^\node}_{t\in\ic{0,T}}$,
$\ba{\UU_t^\node}_{t\in\ic{0,T-1}}$
and $\ba{\WW_t^\node}_{t\in\ic{1,T}}$
be sequences of measurable spaces for each unit $\node \in \NODES$.
We consider two other sequences of measurable vector spaces
$\ba{\mathcal{R}_t^\node}_{t\in\ic{0,T-1}}$ and
$\ba{\mathcal{P}_t^\node}_{t\in\ic{0,T-1}}$
such that for all~$t$, $\mathcal{R}_t^\node$ and
$\mathcal{P}_t^\node$ are paired spaces,
equipped with a bilinear form~$\pscal{\cdot}{\cdot}$.
We also introduce, for all~$\node \in \NODES$ and for
all~$t \in \ic{0, T-1}$,
\begin{itemize}
\item
measurable \emph{local dynamics}
$g_t^\node : \XX_t^\node \times \UU_t^\node \times \WW_{t+1}^\node \to \XX_{t+1}^\node$,
\item
measurable \emph{local coupling functions}
$\Theta_t^\node: \XX_t^\node \times \UU_t^\node \to \mathcal{R}_t^\node$,
\item
measurable \emph{local instantaneous costs}
$L_t^\node: \XX_t^\node \times \UU_t^\node \times \WW_{t+1}^\node \rightarrow
\OpenIntervalClosed{-\infty}{+\infty}$,
\end{itemize}
and a measurable \emph{local final cost}
$K^\node: \XX^\node_T \rightarrow \OpenIntervalClosed{-\infty}{+\infty}$.
We incorporate possible local constraints (for instance constraints
coupling the control with the state) directly in the instantaneous
costs~$L_t^\node$ and the final cost~$K^\node$, since they are extended real
valued functions which can possibly take the value $+\infty$.
From local data given above, we define the global state,
control, noise, resource and price spaces at time~$t$ as
\begin{equation*}
\XX_t = \prod_{\node \in \NODES} \XX_t^\node , \;\:
\UU_t = \prod_{\node \in \NODES} \UU_t^\node , \;\:
\WW_t = \prod_{\node \in \NODES} \WW_t^\node , \;\:
\mathcal{R}_t = \prod_{\node \in \NODES} \mathcal{R}_t^\node , \;\:
\mathcal{P}_t = \prod_{\node \in \NODES} \mathcal{P}_t^\node
\eqfinp
\end{equation*}
We suppose given a \emph{global constraint set}
$S_t \subset \mathcal{R}_t$ \emph{at time~$t$}.
We define the global resource and price spaces~$\mathcal{R}$
and~$\mathcal{P}$, and the global constraint set~$S \subset \mathcal{R}$, as
\begin{equation}
\mathcal{R} = \prod_{t=0}^{T-1} \mathcal{R}_t
\eqsepv
\mathcal{P} = \prod_{t=0}^{T-1} \mathcal{P}_t
\eqsepv
S = \prod_{t=0}^{T-1} S_t
\subset \mathcal{R}
\eqfinv
\label{eq:nodal:global_constraint_set}
\end{equation}
and we denote by $S^{\star} \subset \mathcal{P}$ the dual cone of $S$
(see Equation~\eqref{eq:nodal:dualcone}).
\subsubsection{Centralized and Decentralized Information Structures}
\label{subsec:nodal:generic:globaldata}
We introduce a probability space $(\Omega, \mathcal{F}, \PP)$. For every unit
$\node \in\NODES$, we introduce \emph{local exogenous noise processes}
$\boldsymbol{W}^\node = \na{\boldsymbol{W}_t^\node}_{t\in \ic{1, T}}$, where each
$\boldsymbol{W}_t^\node:\Omega\to\WW_t^\node$ is a random variable.\footnote{Random variables
are denoted using bold letters.} We denote by
\begin{equation}
\label{eq:nodal:generic:globalnoise}
\boldsymbol{W}=(\boldsymbol{W}_1,\cdots,\boldsymbol{W}_T)
\quad\text{ where }\quad
\boldsymbol{W}_t = \sequence{\boldsymbol{W}_t^\node}{\node \in \NODES}
\end{equation}
the \emph{global noise process}.
\begin{subequations}
We consider two \emph{information structures}
\cite[Chap.~3]{carpentier2015stochastic}:
\begin{itemize}
\item
the \emph{centralized} information structure, represented
by the filtration~$\mathcal{F} = \np{\mathcal{F}_t}_{t \in \ic{0,T}}$,
associated with the global noise process~$\boldsymbol{W}$ in~\eqref{eq:nodal:generic:globalnoise},
where
\begin{equation}
\label{eq:nodal:globalinfo}
\mathcal{F}_t = \sigma(\boldsymbol{W}_1, \cdots, \boldsymbol{W}_t)
= \sigma\bp{ \sequence{\boldsymbol{W}_1^\node}{\node \in \NODES}, \cdots,
\sequence{\boldsymbol{W}_t^\node}{\node \in \NODES} }
\end{equation}
is the $\sigma$-field generated by all noises up to time~$t \in \ic{0,T}$,
with the convention~$\mathcal{F}_0 = \{\emptyset,\Omega\}$,
\item
the \emph{decentralized} information structure, represented
by the family \( \sequence{\mathcal{F}^\node}{\node \in \NODES} \)
of filtrations $\mathcal{F}^\node = \np{\mathcal{F}_t^\node}_{t \in \ic{0,T}}$,
where, for any unit $\node \in \NODES $ and any time $t \in \ic{0,T}$,
\begin{equation}
\label{eq:nodal:localinfo}
\mathcal{F}_t^\node = \sigma(\boldsymbol{W}_1^\node, \cdots, \boldsymbol{W}_t^\node)
\subset \mathcal{F}_t = \bigvee_{\node' \in \NODES} \mathcal{F}_t^{\node'}
\eqfinv
\end{equation}
with~$\mathcal{F}_0^\node = \{\emptyset,\Omega\}$.
The \emph{local} $\sigma$-field~$\mathcal{F}_t^\node$ captures
the information provided by the uncertainties up to time~$t$,
\emph{but only in unit~$\node$}.
\end{itemize}
\label{eq:nodal:infos}
\end{subequations}
In the sequel, for a given filtration~$\mathcal{G}$ and a given measurable
space~$\YY$, we denote by ${\mathbb L}^0(\Omega, \mathcal{G}, \PP ; \YY)$ the space
of~\emph{$\mathcal{G}$-adapted processes taking values in the space~$\YY$}.
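As an informal illustration (ours, not part of the formal development) of the two information structures~\eqref{eq:nodal:infos}, the following sketch encodes measurability directly through the arguments that a decision rule is allowed to read: a centralized rule may read the noise histories of all units, whereas a decentralized rule for unit~$\node$ may only read the history of~$\boldsymbol{W}^\node$.
\begin{verbatim}
import numpy as np

# Two units, horizon T = 3. W[t, i] is the noise of unit i at time t
# (row W[0, :] is unused, in line with the convention F_0 = {emptyset, Omega}).
rng = np.random.default_rng(0)
T, N = 3, 2
W = rng.normal(size=(T + 1, N))

def centralized_rule(i, t, W):
    # F_t-measurable: a function of the noises of *all* units up to time t.
    return W[1:t + 1, :].sum()

def decentralized_rule(i, t, W):
    # F_t^i-measurable: a function of the noises of unit i only, up to time t.
    return W[1:t + 1, i].sum()

for t in range(T):
    print(t,
          [centralized_rule(i, t, W) for i in range(N)],
          [decentralized_rule(i, t, W) for i in range(N)])
\end{verbatim}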
\subsubsection{Global Stochastic Control Problem}
We denote by $\boldsymbol{X}_t = \sequence{\boldsymbol{X}_t^\node}{\node\in \NODES}$
and $\boldsymbol{U}_t = \sequence{\boldsymbol{U}_t^\node}{\node\in \NODES}$
families of random variables (each of them with values in~$\XX_t^\node$
and in $\UU_t^\node$).
The stochastic processes $\boldsymbol{X} = (\boldsymbol{X}_0,\cdots,\boldsymbol{X}_T)$
and~$\boldsymbol{U}= (\boldsymbol{U}_0,\cdots,\boldsymbol{U}_{T-1}) $ are called
\emph{global state} and \emph{global control} processes.
The stochastic processes $\boldsymbol{X}^\node = (\boldsymbol{X}_0^\node,\cdots,\boldsymbol{X}_T^\node)$
and $\boldsymbol{U}^\node = (\boldsymbol{U}_0^\node,\cdots,\boldsymbol{U}_{T-1}^\node)$ are called
\emph{local state} and \emph{local control processes}.
With the data detailed in \S\ref{subsec:nodal:generic:localdata}
and \S\ref{subsec:nodal:generic:globaldata}, we formulate
a family of optimization problems as follows.
At each time $t \in \ic{0, T}$, the \emph{global value function}
$V_t : \prod_{\node \in \NODES} \XX_{t}^{\node} \rightarrow [-\infty,+\infty]$
is defined, for all
$\sequence{x_t^\node}{\node \in \NODES} \in \prod_{\node \in \NODES}
\XX_{t}^{\node}$, by
(with the convention~$V_T=\sum_{\node \in \NODES}K^\node$)
\begin{subequations}
\label{eq:nodal:vf}
\begin{align}
V_t\bp{\sequence{x_t^\node}{\node \in \NODES}} = \inf_{\boldsymbol{X}, \boldsymbol{U}} \;
& \EE \bgc{\sum_{\node \in \NODES} \sum_{s=t}^{T-1}
L^\node_s(\va X_s^\node, \va U_s^\node, \va W^\node_{s+1}) +
K^\node(\boldsymbol{X}_T^\node)} \eqfinv
\label{eq:nodal:expected_value}
\\
\text{s.t.}
&\; \boldsymbol{X}_t^\node = x_t^\node \text{\, and \,} \forall s \in \ic{t, T\!-\!1}
\eqfinv \nonumber
\\
&\boldsymbol{X}_{s+1}^\node = {g}_s^\node(\boldsymbol{X}^\node_s, \boldsymbol{U}_s^\node, \boldsymbol{W}_{s+1}^\node)
\eqsepv \boldsymbol{X}_t^\node = x_t^\node
\eqfinv
\label{eq:nodal:dynamic}
\\
&\sigma(\boldsymbol{U}_s^\node) \subset \mathcal{G}_s^\node
\eqfinv
\label{eq:nodal:measurability}
\\
&
\ba{\Theta_s^\node(\boldsymbol{X}_s^\node, \boldsymbol{U}_s^\node)}_{\node \in \NODES} \in -S_s
\eqfinp
\label{eq:nodal:couplingcons}
\end{align}
\end{subequations}
In the global value function~\eqref{eq:nodal:vf}, the expected
value is taken w.r.t.\ (with respect to) the global uncertainty
process~$(\boldsymbol{W}_{t+1}, \cdots, \boldsymbol{W}_T)$.
We assume that measurability and integrability assumptions hold true,
so that the expected value in~\eqref{eq:nodal:expected_value} is well
defined. Constraints~\eqref{eq:nodal:measurability} --- where
$\sigma(\boldsymbol{U}_s^\node)$ is the $\sigma$-field generated by the random
variable~$\boldsymbol{U}_s^\node$ --- express the fact that each decision
$\boldsymbol{U}_s^\node$ is $\mathcal{G}_s^\node$-measurable, that is, measurable
either w.r.t.\ the global information~$\mathcal{F}_s$ (centralized information
structure) available at time~$s$ (see~Equation~\eqref{eq:nodal:globalinfo})
or w.r.t.\ the local information $\mathcal{F}_s^\node$ (decentralized
information structure) available at time~$s$ for unit~$\node$
(see~Equation~\eqref{eq:nodal:localinfo}), as detailed
in~\S\ref{subsec:nodal:generic:globaldata}.
Finally, Constraints~\eqref{eq:nodal:couplingcons} express
the global coupling constraint at time~$s$ between all units
and have to be understood in the $\PP$-almost sure sense.
We are mostly interested in the \emph{global optimization problem}~$
V_0\np{x_0}$,
where $x_0 = \sequence{x_0^\node}{\node \in \NODES} \in \XX_0$ is the initial
state, that is, Problem~\eqref{eq:nodal:vf} for~$t=0$.
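As a rough illustration (again a toy of ours, not one of the benchmarks of Sect.~\ref{chap:district:numerics}), the next script evaluates by Monte Carlo the expected cost~\eqref{eq:nodal:expected_value} of a fixed admissible policy on a two-building example, with battery dynamics playing the role of~\eqref{eq:nodal:dynamic} and a shared import capacity playing the role of the coupling constraint~\eqref{eq:nodal:couplingcons}; since the policy is admissible but not necessarily optimal, the resulting estimate is an upper bound on $V_0(x_0)$.
\begin{verbatim}
import numpy as np

# Two buildings, horizon T = 4. State X^i = battery level, control U^i >= 0 =
# energy bought, noise W^i = local demand, dynamics X^i_{s+1} = X^i_s + U^i_s - W^i_{s+1}.
# Stage cost: price_s * (U^1_s + U^2_s); coupling constraint: U^1_s + U^2_s <= CAP.
rng = np.random.default_rng(42)
T, CAP = 4, 3.0
price = np.array([1.0, 2.0, 1.5, 3.0])

def policy(s, x):
    # Each building tries to refill its battery up to level 2; purchases are then
    # rescaled so that the coupling constraint holds almost surely.
    wish = np.maximum(2.0 - x, 0.0)
    total = wish.sum()
    return wish if total <= CAP else wish * (CAP / total)

def monte_carlo_cost(x0, n_scenarios=10_000):
    costs = np.zeros(n_scenarios)
    for k in range(n_scenarios):
        x = np.array(x0, dtype=float)
        for s in range(T):
            u = policy(s, x)
            assert u.sum() <= CAP + 1e-9          # coupling constraint (a.s.)
            w = rng.uniform(0.0, 1.5, size=2)     # local demands W_{s+1}
            costs[k] += price[s] * u.sum()
            x = x + u - w                         # dynamics g_s
    return costs.mean()

print("estimated cost of this policy (upper bound on V_0):", monte_carlo_cost([1.0, 0.5]))
\end{verbatim}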
| 3,376 | 28,682 |
en
|
train
|
0.4977.3
|
\subsubsection{Local Price and Resource Value Functions}
\label{sec:nodal:localvaluefunctions}
As in \S\ref{subsec:nodal:bounds}, we define
local price and local resource value functions for the
global multistage stochastic optimization problems~\eqref{eq:nodal:vf}.
For this purpose, we introduce a duality pairing between stochastic processes.
For each $\node\in\NODES$, we consider subspaces
\( \widetilde{{\mathbb L}}(\Omega,\mathcal{F},\PP ;\mathcal{R}^\node)
\subset {\mathbb L}^0(\Omega, \mathcal{F}, \PP ; \mathcal{R}^\node) \)
and
\( \widetilde{{\mathbb L}}^{\star}(\Omega,\mathcal{F},\PP ;\mathcal{P}^\node)
\subset {\mathbb L}^0(\Omega, \mathcal{F}, \PP ; \mathcal{P}^\node) \) such that
the duality product terms
$\EE\bc{\sum_{t=0}^{T-1}\pscal{\va{p}_t^\node}{\Theta_t^\node(\boldsymbol{X}_t^\node,\boldsymbol{U}_t^\node)}}$
in Equation~\eqref{eq:nodal:priceproblem-t}
are well defined (like in the case of square integrable
random variables, when
$\Theta_t^\node(\boldsymbol{X}_t^\node, \boldsymbol{U}_t^\node) \in {\mathbb L}^2(\Omega,\mathcal{F}_t, \PP ; \RR^d)$
and $\va{p}_t^\node \in {\mathbb L}^2(\Omega,\mathcal{F}_t, \PP ; \RR^d)$).
Let $\node \in \NODES$ be a local unit, and
$\va{p}^\node = (\va{p}_0^\node, \cdots, \va{p}_{T-1}^\node)
\in \widetilde{{\mathbb L}}^{\star}(\Omega,\mathcal{F},\PP ;\mathcal{P}^\node)$ be
a \emph{local price process} --- hence, adapted to the global
filtration $\mathcal{F}$ in \eqref{eq:nodal:globalinfo} generated by
the global noises (note that we do not assume that it is adapted
to the local filtration $\mathcal{F}^\node$ in \eqref{eq:nodal:localinfo}
generated by the local noises).
When specialized to the context of Problems~\eqref{eq:nodal:vf},
Equation~\eqref{eq:nodal:genericdualvf} gives,
at each time $t \in \ic{0, T}$,
what we call \emph{local price value functions}
$\underline V^\node_t\nc{\va{p}^\node} : \XX_{t}^{\node} \rightarrow
\ClosedIntervalOpen{-\infty}{+\infty}$
defined, for all $x_t^\node \in \XX_t^\node$, by
(with the convention~$\underline V^\node_T\nc{\va{p}^\node}=K^\node$)
\begin{align}
\underline V^\node_t\nc{\va{p}^\node}(x_t^\node) = \inf_{\boldsymbol{X}^\node, \boldsymbol{U}^\node} \;
& \EE \bigg[\sum_{s=t}^{T-1}
\Big(L^\node_s(\va X_s^\node,\va U_s^\node,\va W^\node_{s+1})
\nonumber \\
& \hspace{1.5cm} + \pscal{\va{p}_s^\node}{\Theta_s^\node(\boldsymbol{X}_s^\node, \boldsymbol{U}_s^\node)}\Big)
+ K^\node(\boldsymbol{X}_T^\node)\bigg] \eqfinv
\label{eq:nodal:priceproblem-t}
\\
\text{s.t.}
& \; \boldsymbol{X}_t^\node = x_t^\node \text{\, and \,} \forall s \in \ic{t, T\!-\!1},
\eqref{eq:nodal:dynamic}, \eqref{eq:nodal:measurability}.
\nonumber
\end{align}
We suppose that
\( \underline V^\node_t\nc{\va{p}^\node}(x_t^\node) < +\infty \)
in~\eqref{eq:nodal:priceproblem-t}.
We define the \emph{global price value function}
$\underline V_t\nc{\va{p}} : \XX_{t} \rightarrow \ClosedIntervalOpen{-\infty}{+\infty}$
at time~$t\in\ic{0,T}$ as
the sum of the corresponding local price value functions, that is,
using the notation \( x_t=\sequence{x_t^\node}{\node \in \NODES} \),
\begin{equation}
\label{eq:global:priceproblem}
\underline V_t\nc{\va{p}}(x_t) = \sum_{\node \in \NODES}
\underline V^\node_t\nc{\va{p}^\node}(x_t^\node)
\eqsepv \forall x_t \in \XX_{t}
\eqfinp
\end{equation}
In the same vein, let
$\va{r}^\node =(\va{r}_0^\node,\cdots,\va{r}_{T-1}^\node)
\in \widetilde{{\mathbb L}}(\Omega,\mathcal{F},\PP ; \mathcal{R}^\node)$
be a \emph{local resource process}.
Equation~\eqref{eq:nodal:genericprimalvf} gives,
at each time $t \in \ic{0, T}$,
what we call \emph{local resource value functions}
$\overline V^\node_t\nc{\va{r}^\node} : \XX_{t}^{\node} \rightarrow \OpenIntervalClosed{-\infty}{+\infty}$
defined, for all $x_t^\node \in \XX_t^\node$, by
(with the convention~$\overline{V}^\node_T\nc{\va{r}^\node}=K^\node$)
\begin{subequations}
\label{eq:nodal:quantproblem-t}
\begin{align}
& \overline{V}^\node_t\nc{\va{r}^\node}(x_t^\node) =
\inf_{\boldsymbol{X}^\node, \boldsymbol{U}^\node} \;
\EE \bgc{ \sum_{s=t}^{T-1}
L^\node_s(\va X_s^\node, \va U_s^\node, \va W^\node_{s+1}) +
K^\node(\boldsymbol{X}_T^\node)} \eqfinv \\
\text{s.t.}
& \, \boldsymbol{X}_t^\node = x_t^\node \text{\, and \,} \FORALLTIMES{s}{t}{T\!-\!1},
\eqref{eq:nodal:dynamic}, \eqref{eq:nodal:measurability}
\text{\, and \,} \Theta_s^\node(\boldsymbol{X}_s^\node, \boldsymbol{U}_s^\node) = \va{r}_s^\node \eqfinp
\end{align}
\end{subequations}
We suppose that
\( \overline{V}^\node_t\nc{\va{r}^\node}(x_t^\node) > -\infty \)
in~\eqref{eq:nodal:quantproblem-t}.
We define the \emph{global resource value function}
\( \overline V_t\nc{\va{r}} : \XX_{t} \rightarrow \OpenIntervalClosed{-\infty}{+\infty} \)
at time $t \in \ic{0, T}$ as the sum
of the local resource value functions, that is,
\begin{equation}
\label{eq:global:quantproblem}
\overline V_t\nc{\va{r}}(x_t) = \sum_{\node \in \NODES}
\overline V^\node_t\nc{\va{r}^\node}(x_t^\node)
\eqsepv \forall x_t \in \XX_{t}
\eqfinp
\end{equation}
We call the global processes
$\va{p} \in \widetilde{{\mathbb L}}^{\star}(\Omega, \mathcal{F}, \PP ; \mathcal{P})$
and
$\va{r} \in \widetilde{{\mathbb L}}(\Omega,\mathcal{F},\PP ;\mathcal{R})$
respectively
a \emph{price coordination process}
and
a \emph{resource coordination process}.
\subsubsection{Global Upper and Lower Bounds}
\label{subsec:nodal:globalprocess}
Applying Proposition~\ref{prop:nodal:valuefunctionsbounds}
to the local price value functions~\eqref{eq:nodal:priceproblem-t}
and resource value functions~\eqref{eq:nodal:quantproblem-t}
makes it possible to bound the values of the global problems~\eqref{eq:nodal:vf}.
For this purpose, we first define the notion of \emph{admissible}
price and resource coordination processes.
\begin{subequations}
We introduce the primal admissible set~$SSTO$ of stochastic processes
associated with the almost sure constraints~\eqref{eq:nodal:couplingcons}:
\begin{multline}
\label{eq:nodal:primaladmissibleset}
SSTO = \Big\{\va y = (\va y_0,\cdots,\va y_{T-1})
\in \widetilde{{\mathbb L}}(\Omega,\mathcal{F},\PP;\mathcal{R})
\\
\;\; \text{s.t.} \;\;
\va y_t \in S_t \;\; \PP\text{-}\as \eqsepv
\FORALLTIMES{t}{0}{T\!-\!1}\Big\} \eqfinp
\end{multline}
Then, the dual admissible cone of~$SSTO$ is
\begin{multline}
\label{eq:nodal:dualadmissibleset}
SSTO^\star = \Big\{\va z = (\va z_0, \cdots, \va z_{T-1})
\in \widetilde{{\mathbb L}}^{\star}(\Omega,\mathcal{F},\PP;\mathcal{P}) \\
\text{s.t.} \;\;
\EE \bc{\pscal{\va y_t}{\va z_t}} \geq 0 \eqsepv
\forall \: \va y \in SSTO \eqsepv
\FORALLTIMES{t}{0}{T\!-\!1}\Big\} \eqfinp
\end{multline}
\label{eq:nodal:admissiblesets}
\end{subequations}
We say that $\va{p} \in \widetilde{{\mathbb L}}^{\star}(\Omega,\mathcal{F},\PP;\mathcal{P})$
is an \emph{admissible price coordination process}
if
$\va{p} \in SSTO^\star$, and that
$\va{r} \in \widetilde{{\mathbb L}}(\Omega,\mathcal{F},\PP ;\mathcal{R})$
is an \emph{admissible resource coordination process}
if $\va{r} \in -SSTO$.
By considering admissible coordination processes, we now
bound the global value functions~\eqref{eq:nodal:vf} from above and below using
the local value functions~\eqref{eq:nodal:priceproblem-t}
and~\eqref{eq:nodal:quantproblem-t}.
\begin{proposition}
\label{prop:nodal:stochasticvaluefuncbounds}
Let $\va{p}=\sequence{\va{p}^\node}{\node \in \NODES} \in SSTO^\star$
be an admissible price coordination process,
and let $\va{r}=\sequence{\va{r}^\node}{\node\in \NODES} \in -SSTO$
be an admissible resource coordination process.
Then, for all~$t\in\ic{0,T}$ and
for all $x_t = \sequence{x_t^\node}{\node \in \NODES} \in \XX_t$,
we have the inequalities
\begin{equation}
\label{eq:nodal:stochasticvaluefuncbounds}
\sum_{\node \in \NODES} \underline{V}_t^\node\nc{\va{p}^\node}(x_t^\node) \leq
V_t(x_t) \leq
\sum_{\node \in \NODES} \overline{V}_t^\node\nc{\va{r}^\node}(x_t^\node) \eqfinp
\end{equation}
\end{proposition}
\begin{proof}
For~$t=0$, the result is a direct
application of Proposition~\ref{prop:nodal:valuefunctionsbounds}
to Problem~\eqref{eq:nodal:vf}.
For~$t \in \ic{1,T\!-\!1}$,
from the definitions~\eqref{eq:nodal:admissiblesets}
of~$SSTO$ and~$SSTO^\star$, the assumption that
$(\va{r}_0, \cdots, \va{r}_{T-1})$
(resp.~$(\va{p}_0, \cdots, \va{p}_{T-1})$)
is an admissible process implies that
the reduced process $(\va{r}_t, \cdots, \va{r}_{T-1})$
(resp.~$(\va{p}_t, \cdots, \va{p}_{T-1})$) is also admissible
on the reduced time interval~$\ic{t,T-1}$, hence the result
by applying Proposition~\ref{prop:nodal:valuefunctionsbounds}.
\end{proof}
\section{Decomposition of Local Value Functions by Dynamic Programming}
\label{sec:genericdecomposition}
In~\S\ref{subsec:nodal:globalprocess}, we have obtained
upper and lower bounds of optimization problems by
spatial decomposition. We now give conditions under which
\emph{spatial decomposition} schemes can be made
\emph{compatible with temporal decomposition},
thus yielding a mix of spatial and temporal decompositions.
In~\S\ref{subsec:nodal:decomposedDPdeterministic}, we show
that the local price value functions~\eqref{eq:nodal:priceproblem-t}
and the local resource value functions~\eqref{eq:nodal:quantproblem-t}
can be computed by Dynamic Programming,
when price and resource processes are deterministic.
In~\S\ref{subsec:nodal:processdesign}, we sketch how to obtain
tighter bounds by appropriately choosing the deterministic price
and resource processes.
In~\S\ref{subsec:nodal:admissiblepolicy}, we show
how to use local price and resource value functions as
surrogates for the global Bellman value functions,
and then produce global admissible policies.
In~\S\ref{subsec:nodal:decentralizedinformation}, we analyze
the case of a \emph{decentralized information structure}.
In the sequel, we make the following key assumption.
\begin{assumption}
\label{hyp:independent}
The global uncertainty process $\np{\boldsymbol{W}_1, \cdots, \boldsymbol{W}_T}$
in \eqref{eq:nodal:generic:globalnoise}
consists of stagewise independent random variables.
\end{assumption}
In the case where $\mathcal{G}_t^\node=\mathcal{F}_t$ for all~$t\in\ic{0,T}$
and all~$\node \in \NODES$ (centralized information structure in~\S\ref{subsec:nodal:generic:globaldata}),
under Assumption~\ref{hyp:independent},
the global value functions~\eqref{eq:nodal:vf} satisfy
the Dynamic Programming equations \cite{carpentier2015stochastic}
\begin{subequations}
\label{eq:globaldp}
\begin{align}
V_T(x_T)
& = \sum_{\node \in \NODES} K^{\node}(x^{\node}_T)
\quad \text{ and, for \( t=T\!-\!1, \ldots, 0 \),}
\\
V_t(x_t)
& = \inf_{u_t \in \UU_t} \EE
\bgc{\sum_{\node \in \NODES} L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}^{\node}_{t+1}) +
V_{t+1}\bp{\sequence{\boldsymbol{X}_{t+1}^\node}{\node \in \NODES}}}
\\
& \hphantom{u_t \in \UU_t} \text{s.t.}\ \;
\boldsymbol{X}_{t+1}^{\node} = {g}_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}^{\node}_{t+1}) \eqfinv \\
& \hphantom{u_t \in \UU_t \text{s.t.}\ } \;
\ba{\Theta_t^\node(x_t^\node, u_t^\node)}_{\node \in \NODES} \in -S_t \eqfinp
\end{align}
\end{subequations}
In the case where $\mathcal{G}_t^{\node}=\mathcal{F}_t^{\node}$ for all~$t\in\ic{0,T}$
and all~$\node \in \NODES$ (decentralized information structure
in~\S\ref{subsec:nodal:generic:globaldata}), the common assumptions
under which the global value functions~\eqref{eq:nodal:vf} satisfy
Dynamic Programming equations are not met.
\subsection{Decomposed Value Functions by Deterministic Coordination Processes}
\label{subsec:nodal:decomposedDPdeterministic}
We prove now that, for deterministic coordination processes,
the local problems~\eqref{eq:nodal:priceproblem-t} and
\eqref{eq:nodal:quantproblem-t} satisfy local Dynamic Programming
equations.
We first study the local price value function~\eqref{eq:nodal:priceproblem-t}.
\begin{proposition}
\label{prop:nodal:dppriceconstant}
Let $p^{\node}= (p_0^{\node}, \cdots, p_{T-1}^{\node})
\in \mathcal{P}^{\node}$ be a deterministic price process.
Then, be it for the centralized or the decentralized information
structure (see \S\ref{subsec:nodal:generic:globaldata}),
the local price value functions~\eqref{eq:nodal:priceproblem-t}
satisfy the following recursive Dynamic Programming equations
\begin{subequations}
\label{eq:localdp}
\begin{align}
\underline{V}_T^{\node}\nc{p^{\node}}(x_T^{\node}) =
& \; K^{\node}(x_T^{\node})
\quad \text{and, for \( t=T\!-\!1, \ldots, 0 \),}
\\
\underline{V}_t^{\node}\nc{p^{\node}}(x_t^{\node}) =
& \inf_{u_t^\node \in \UU_t^{\node}} \;
\EE \Big[ L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) +
\pscal{p_t^{\node}}{\Theta_t^{\node}(x_t^{\node}, u_t^{\node})}
\\
& \hspace{3.5cm} + \underline{V}_{t+1}^{\node}\nc{p^{\node}}
\bp{{g}_t^{\node}(x_t^{\node}, u_t^{\node},
\boldsymbol{W}_{t+1}^{\node})} \Big]
\eqfinp
\nonumber
\end{align}
\end{subequations}
\end{proposition}
\begin{proof}
Let $p^{\node} = (p_0^{\node},\cdots,p_{T-1}^{\node})\in \mathcal{P}^{\node}$
be a deterministic price vector. Then, the price value
function~\eqref{eq:nodal:priceproblem-t} has the following expression:
\begin{align}
\underline V^{\node}_0\nc{p^{\node}}(x_0^{\node}) = \inf_{\boldsymbol{X}^{\node}, \boldsymbol{U}^{\node}}
& \EE \bigg[\sum_{t=0}^{T-1}
L^{\node}_t(\va X_t^{\node}, \va U_t^{\node}, \va W^{\node}_{t+1}) \nonumber
\\
& \hspace{2.0cm} + \pscal{p_t^{\node}}{\Theta_t^{\node}(\boldsymbol{X}_t^{\node},\boldsymbol{U}_t^{\node})}
+ K^{\node}(\boldsymbol{X}_T^{\node}) \bigg]
\eqfinv
\label{eq:nodal:priceproblem-deter}
\\
\text{s.t.}
& \;\boldsymbol{X}_0^{\node} = x_0^{\node}\text{\, and \,}
\FORALLTIMES{s}{0}{T\!-\!1},
\eqref{eq:nodal:dynamic}, \eqref{eq:nodal:measurability} \nonumber
\eqfinp
\end{align}
In the case where~$\mathcal{G}_t^{\node}=\mathcal{F}_t$, and as
Assumption~\ref{hyp:independent} holds true, the optimal value
of Problem~\eqref{eq:nodal:priceproblem-deter} can be obtained
by the recursive Dynamic Programming equations~\eqref{eq:localdp}.
Consider now the case~$\mathcal{G}_t^{\node}=\mathcal{F}_t^{\node}$. Since the local
value function and local dynamics in~\eqref{eq:nodal:priceproblem-deter}
only depend on the local noise process~$\boldsymbol{W}^{\node}$, there is no loss of
optimality to replace the constraint $\sigma(\boldsymbol{U}_t^{\node}) \subset \mathcal{F}_t$
by $\sigma(\boldsymbol{U}_t^{\node}) \subset \mathcal{F}_t^{\node}$.
Moreover, Assumption~\ref{hyp:independent} implies that the local
uncertainty process $(\boldsymbol{W}_{1}^{\node},\dots,\boldsymbol{W}_{T}^{\node})$ consists
of stagewise independent random variables, so that the solution of
Problem~\eqref{eq:nodal:priceproblem-deter} can be obtained
by the recursive Dynamic Programming equations~\eqref{eq:localdp}
when replacing the \emph{global} $\sigma$-field~$\mathcal{F}_t$
by the \emph{local} $\sigma$-field~$\mathcal{F}_t^{\node}$
(see Equation~\eqref{eq:nodal:infos}).
\end{proof}
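For illustration, we give below a minimal Python sketch (not the implementation used in Sect.~\ref{chap:district:numerics}) of the backward recursion~\eqref{eq:localdp} for a single unit with a deterministic price process. The grids, dynamics, instantaneous cost, coupling function, noise distribution and price values are all placeholder assumptions, and the price is taken scalar for simplicity.
\begin{verbatim}
import numpy as np

# Backward Dynamic Programming for ONE unit under a deterministic price process.
# All model ingredients below are illustrative placeholders.
T = 4
states = np.linspace(0.0, 1.0, 11)       # discretized local state (e.g. stock level)
controls = np.linspace(-0.2, 0.2, 9)     # discretized local control
noise = [(0.0, 0.3), (0.1, 0.4), (0.2, 0.3)]   # finite support of W_{t+1}: (value, prob)
price = [1.0, 1.2, 0.8, 1.0]             # deterministic (scalar) price p_t

def g(x, u, w):      # local dynamics, clipped to the state grid
    return min(max(x + u - w, 0.0), 1.0)

def L(x, u, w):      # local instantaneous cost (e.g. imported energy)
    return max(u + w, 0.0)

def Theta(x, u):     # local coupling function
    return u

def K(x):            # final cost
    return -x

V = [None] * (T + 1)
V[T] = np.array([K(x) for x in states])
for t in range(T - 1, -1, -1):           # backward recursion with linear interpolation
    Vt = np.empty_like(states)
    for i, x in enumerate(states):
        Vt[i] = min(
            sum(pw * (L(x, u, w) + price[t] * Theta(x, u)
                      + np.interp(g(x, u, w), states, V[t + 1]))
                for w, pw in noise)
            for u in controls)
    V[t] = Vt

print("local price value function at t=0, x=0.5:", np.interp(0.5, states, V[0]))
\end{verbatim}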
A similar result holds true for the local resource value
functions~\eqref{eq:nodal:quantproblem-t} as stated now in Proposition~\ref{prop:nodal:dpquantconstant}
whose proof is left to the reader.
\begin{proposition}
\label{prop:nodal:dpquantconstant}
Let $r^{\node}= (r_0^{\node}, \cdots, r_{T-1}^{\node})
\in \mathcal{R}^{\node}$ be a deterministic resource process.
Then, be it for the centralized or the decentralized information structure
in~\S\ref{subsec:nodal:generic:globaldata},
the local resource value functions~\eqref{eq:nodal:quantproblem-t}
satisfy the following recursive Dynamic Programming equations
\begin{subequations}
\begin{align}
\overline{V}_T^{\node}\nc{r^{\node}}(x_T^{\node}) =
& K^{\node}(x_T^{\node})
\quad \text{and, for \( t=T\!-\!1, \ldots, 0 \),}
\\
\overline{V}_t^{\node}\nc{r^{\node}}(x_t^{\node}) =
& \inf_{u_t^\node \in \UU_t^{\node}} \EE \Bc{L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node})
+ \overline{V}_{t+1}^{\node}\nc{r^{\node}}
\bp{{g}_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node})}}
\eqfinv
\nonumber
\\
& \text{s.t.}\ \;\; \Theta_t^{\node}(x_t^{\node}, u_t^{\node}) = r_t^{\node}
\eqfinp
\end{align}
\label{eq:nodal:localdpquant}
\end{subequations}
\end{proposition}
\subsection{Computing Upper and Lower Bounds, and Decomposed Value Functions}
\label{subsec:nodal:processdesign}
In the context of a deterministic \emph{admissible} price coordination process
$p = (p_0,\cdots,p_{T-1})\in S^\star$
and of a deterministic admissible resource coordination process
$r = (r_0, \cdots, r_{T-1}) \in -S$,
where~$S$ is defined in~\eqref{eq:nodal:global_constraint_set}, the double inequality
\eqref{eq:nodal:stochasticvaluefuncbounds}
in Proposition~\ref{prop:nodal:stochasticvaluefuncbounds} becomes
\begin{equation}
\label{eq:nodal:boundsvaluefunctiondeterministic}
\sum_{\node \in \NODES} \underline{V}_t^{\node}\nc{p^{\node}}(x_t^{\node})
\leq V_t(x_t)\leq
\sum_{\node \in \NODES} \overline{V}_t^{\node}\nc{r^{\node}}(x_t^{\node})
\eqfinp
\end{equation}
\begin{itemize}
\item
Both in the lower bound and the upper bound of~$V_t$
in~\eqref{eq:nodal:boundsvaluefunctiondeterministic},
the sum over units~$\node\in\NODES$ materializes the spatial decomposition
for the computation of the bounds. For each of the bounds, this
decomposition leads to independent optimization subproblems
that can be processed in parallel.
\item
For a given unit~$\node\in\NODES$,
the computation of the local value functions~$\underline{V}_t^{\node}\nc{p^{\node}}$
and~$\overline{V}_t^{\node}\nc{r^{\node}}$ for $t \in \ic{0,T}$
can be performed by Dynamic Programming
as stated in Propositions
\ref{prop:nodal:dppriceconstant} and~\ref{prop:nodal:dpquantconstant}.
The corresponding loop in backward time materializes the temporal
decomposition, processed sequentially.
\end{itemize}
Now, we suppose given an initial state
$x_0 = \sequence{x_0^\node}{\node \in \NODES} \in \XX_0$
and we sketch how, by suitably choosing the admissible coordination processes,
we can improve
the upper and lower bounds~\eqref{eq:nodal:boundsvaluefunctiondeterministic}
for~$V_0\np{x_0}$, that is, the optimal value of Problem~\eqref{eq:nodal:vf} for~$t=0$.
By Propositions \ref{prop:nodal:stochasticvaluefuncbounds}
and~\ref{prop:nodal:dppriceconstant}, for any deterministic
$p=(p_0,\cdots,p_{T-1}) \in S^\star$,
we have
\( \sum_{\node \in \NODES} \underline{V}_0^{\node}\nc{p^{\node}}(x_0^{\node})
\; \leq \; V_0(x_0) \).
As a consequence, solving the following optimization problem
\begin{equation}
\sup_{p \in S^\star} \sum_{\node \in \NODES}
\underline V^{\node}_0\nc{p^{\node}}(x_0^{\node})
\label{eq:nodal:relaxedconstraintdual}
\end{equation}
gives the greatest possible lower bound in the class
of deterministic admissible price coordination processes.
We can maximize the objective of Problem~\eqref{eq:nodal:relaxedconstraintdual}
with respect to~$p$ using a gradient-like ascent algorithm.
Updating $p$ requires the computation of the gradient of
$\sum_{\node \in \NODES} \underline{V}_0^{\node}\nc{p^{\node}}(x_0^{\node})$,
obtained when computing the price value functions.
The standard update formula corresponding to the gradient algorithm
(Uzawa algorithm) can be replaced by more sophisticated methods (Quasi-Newton).
By Propositions \ref{prop:nodal:stochasticvaluefuncbounds}
and~\ref{prop:nodal:dpquantconstant}, for any
deterministic $r=(r_0,\cdots,r_{T-1}) \in -S$,
we have
\( V_0\bp{\sequence{x_0^\node}{\node \in \NODES}} \; \leq \;
\sum_{\node \in \NODES} \overline{V}_0^{\node}\nc{r^{\node}}(x_0^{\node})
\).
As a consequence, solving the following optimization problem
\begin{equation}
\label{eq:nodal:overconstraint}
\inf_{r\in -S} \sum_{\node \in \NODES}
\overline{V}_0^{\node}\nc{r^{\node}}(x_0^{\node})
\end{equation}
gives the lowest possible upper bound in the set
of deterministic admissible resource coordination processes.
Again, we can minimize the objective of Problem~\eqref{eq:nodal:overconstraint}
with respect to~$r$ using a gradient-like algorithm.
Updating $r$ requires the computation of the gradient of
$\sum_{\node \in \NODES} \overline{V}_0^{\node}\nc{r^{\node}}(x_0^{\node})$, obtained
when computing the resource value functions.
Again, the standard update formula corresponding to the gradient
algorithm can be replaced by more sophisticated methods.
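The outer coordination loop can be sketched as follows in Python. In the actual algorithm, the ascent direction at $p$ is the expected coupling residual computed from the local solutions; in this self-contained sketch it is replaced by a toy quadratic model, the projection onto $S^\star$ is omitted, and all numerical values are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Sketch of the price coordination loop (Uzawa-type gradient ascent).
# The "residual" below is a TOY stand-in for the expected coupling term
# E[ sum_node Theta_t(X_t, U_t) ] obtained from the local DP solutions.
T = 4
p_star = np.array([1.0, 1.2, 0.8, 1.0])      # artificial maximizer (assumption)

def residual(p):                              # toy ascent direction, zero at p_star
    return p_star - p

def dual_value(p):                            # toy concave surrogate of the lower bound
    return -0.5 * np.sum((p - p_star) ** 2)

p = np.zeros(T)                               # initial deterministic price process
rho = 0.5                                     # gradient step
for it in range(200):
    grad = residual(p)
    p = p + rho * grad                        # a projection onto S^star would go here
    if np.linalg.norm(grad) < 1e-8:
        break

print("price process:", p.round(3), " toy lower bound:", dual_value(p))
\end{verbatim}
A quasi-Newton method simply replaces the plain gradient step above by an update built from successive gradients.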
At the end of the procedure, we have obtained
a deterministic admissible price coordination process
$p=(p_0,\cdots,p_{T-1}) \in S^\star$
and a deterministic admissible resource coordination process
$r=(r_0,\cdots,r_{T-1}) \in -S$
such that~$V_0\np{x_0}$,
the optimal value of Problem~\eqref{eq:nodal:vf} for~$t=0$,
is tightly bounded above and below like
in~\eqref{eq:nodal:boundsvaluefunctiondeterministic}
for~$t=0$. We have also obtained the solutions
$\na{\underline{V}_t^{\node}\nc{p}}_{t\in \ic{0, T}}$
and
$\na{\overline{V}_t^{\node}\nc{r}}_{t\in \ic{0, T}}$
of the recursive Dynamic Programming Equations~\eqref{eq:localdp} and~\eqref{eq:nodal:localdpquant}
associated with these coordination processes.
\subsection{Devising Policies}
\label{subsec:nodal:admissiblepolicy}
Now that we have decomposed value functions,
we show how to devise policies.
By \emph{policy}, we mean a sequence
\( \gamma = \ba{\gamma_{t}}_{t\in \ic{0, T-1}} \) where,
for any \( t\in\ic{0,T{-}1} \),
each $\gamma_t$ is a \emph{state feedback}, that is,
a measurable mapping \( \gamma_t : \XX_t\to\UU_t \).
Here, we suppose that we have at our disposal pre-computed \emph{local}
value functions $\na{\underline V_t^{\node}}_{t\in \ic{0, T}}$
and $\na{\overline V_t^{\node}}_{t\in \ic{0, T}}$ solving
Equations~\eqref{eq:localdp} for the price value functions
and Equations~\eqref{eq:nodal:localdpquant} for the resource
value functions.
For instance, one could use the functions
$\na{\underline{V}_t^{\node}\nc{p}}_{t\in \ic{0, T}}$
and
$\na{\overline{V}_t^{\node}\nc{r}}_{t\in \ic{0, T}}$
obtained at the end of~\S\ref{subsec:nodal:processdesign}.
Using the sum of these local value functions
as a surrogate for a global Bellman value function,
we propose two \emph{global} policies as follows
(supposing that the $\argmin$ are not empty and that the resulting expressions
provide measurable mappings \cite{bertsekas-shreve:1996}):
\noindent
1) a \emph{global price policy}
\( \underline \gamma =
\ba{\underline\gamma_{t}}_{t\in \ic{0, T-1}} \)
with, for any \( t\in\ic{0,T{-}1} \), the feedback
$\underline \gamma_t:\XX_t\to\UU_t$ defined
for all $x_t = \sequence{x_t^\node}{\node \in \NODES} \in \XX_t$ by
\begin{align}
\underline\gamma_t(x_t) \in \argmin_{\sequence{u_t^\node}{\node \in \NODES}}
& \; \EE\bgc{\sum_{\node \in \NODES} L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) +
\underline V_{t+1}^{\node}\bp{g_t^{\node}(x_t^{\node}, u_t^{\node},
\boldsymbol{W}_{t+1}^{\node})}} \eqsepv
\nonumber \\
\text{s.t.}\
& \; \ba{\Theta_t^\node(x_t^\node, u_t^\node)}_{\node \in \NODES}
\in -S_t
\eqfinv
\label{eq:nodal:globalpricepolicy}
\end{align}
2) a \emph{global resource policy}
\( \overline \gamma =
\ba{\overline\gamma_{t}}_{t\in \ic{0, T-1}} \)
with, for any \( t \in \ic{0,T{-}1} \), the feedback
$\overline \gamma_t: \XX_t \to \UU_t$ defined
for all $x_t = \sequence{x_t^\node}{\node \in \NODES} \in \XX_t$ by
\begin{align}
\overline\gamma_t(x_t) \in \argmin_{\sequence{u_t^\node}{\node \in \NODES}}
& \; \EE\bgc{\sum_{\node \in \NODES} L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) +
\overline V_{t+1}^{\node}\bp{g_t^{\node}(x_t^{\node}, u_t^{\node},
\boldsymbol{W}_{t+1}^{\node})}}
\eqfinv
\nonumber \\
\text{s.t.}\
& \; \ba{\Theta_t^\node(x_t^\node, u_t^\node)}_{\node \in \NODES}
\in -S_t
\eqfinp
\label{eq:nodal:globalresourcepolicy}
\end{align}
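A minimal Python sketch of the lookahead step~\eqref{eq:nodal:globalpricepolicy} is given below for two units with scalar states and controls, the coupling set being taken, for illustration only, as $\{\sum_{\node} \Theta_t^\node(x_t^\node,u_t^\node) = 0\}$. The pre-computed local value functions at time~$t+1$, as well as the dynamics, costs and noise, are placeholder assumptions.
\begin{verbatim}
import itertools
import numpy as np

# One-step lookahead policy using the SUM of local value functions as a
# surrogate for the global Bellman function. Toy model with two units.
states = np.linspace(0.0, 1.0, 11)
controls = np.linspace(-0.2, 0.2, 5)
noise = [(0.0, 0.5), (0.2, 0.5)]                       # (value, probability), same for both units
V_next = [np.linspace(1.0, 0.0, 11), np.linspace(0.8, 0.0, 11)]   # hypothetical local V_{t+1}

def g(x, u, w):  return min(max(x + u - w, 0.0), 1.0)  # local dynamics
def L(x, u, w):  return max(u + w, 0.0)                # local cost
def Theta(x, u): return u                              # local coupling function

def price_policy(x):
    best_u, best_cost = None, float("inf")
    for u in itertools.product(controls, repeat=2):
        if abs(sum(Theta(x[n], u[n]) for n in range(2))) > 1e-9:   # coupling: sum = 0
            continue
        cost = sum(
            p * (L(x[n], u[n], w) + np.interp(g(x[n], u[n], w), states, V_next[n]))
            for n in range(2) for w, p in noise)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

print("control chosen at x = (0.5, 0.3):", price_policy((0.5, 0.3)))
\end{verbatim}
The same enumeration with the resource value functions in place of the price value functions gives the global resource policy~\eqref{eq:nodal:globalresourcepolicy}.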
Given a policy \( \gamma = \ba{\gamma_{t}}_{t\in \ic{0, T-1}} \)
and any time $t \in \ic{0, T}$, the expected cost of policy
$\gamma$ starting from state~$x_t$ at time~$t$ is equal to
\begin{align}
V_t^\gamma(x_t) =
& \; \EE\bgc{\sum_{\node \in \NODES} \sum_{s=t}^{T-1}
L_s^{\node}(\boldsymbol{X}_s^{\node},\gamma_s^{\node}(\boldsymbol{X}_s),\boldsymbol{W}_{s+1}^{\node}) +
K^{\node}(\boldsymbol{X}_T^{\node})}
\eqfinv
\label{eq:nodal:costpolicy}
\\
\text{s.t.}\
& \FORALLTIMES{s}{t}{T\!-\!1} \eqsepv
\boldsymbol{X}_{s+1}^{\node} = {g}_s^{\node}(\boldsymbol{X}^{\node}_s, \gamma_s^{\node}(\boldsymbol{X}_s), \boldsymbol{W}_{s+1}^{\node})
\eqsepv \boldsymbol{X}_t^{\node} = x_t^{\node}
\eqfinp
\nonumber
\end{align}
We provide several bounds hereafter.
\begin{proposition}
\label{prop:nodal:boundresourcepolicy}
Let $t \in \ic{0, T}$ and $x_t = \sequence{x_t^\node}{\node \in \NODES} \in \XX_t$
be a given state. Then, we have
\begin{subequations}
\begin{align}
\sum_{\node \in \NODES} \underline{V}_t^{\node}(x_t^{\node}) \leq V_t(x_t)
& \leq
V_t^{\overline \gamma}(x_t)
\leq \sum_{\node \in \NODES} \overline{V}_t^{\node}(x_t^{\node})
\eqfinv
\label{eq:nodal:boundresourcepolicy_a}
\\
V_t(x_t)
& \leq \inf
\ba{V_t^{\underline \gamma}(x_t),V_t^{\overline \gamma}(x_t)}
\eqfinp
\end{align}
\label{eq:nodal:boundresourcepolicy}
\end{subequations}
\end{proposition}
\begin{proof}
We prove the right hand side inequality
in~\eqref{eq:nodal:boundresourcepolicy_a} by backward induction.
At time $t = T$, the result is straightforward as
$\overline{V}_t^{\node} = K^{\node}$ for all $\node \in \NODES$.
Let $t \in \ic{0, T-1}$ such that the right hand side inequality
in~\eqref{eq:nodal:boundresourcepolicy_a} holds true at time $t+1$.
Then, for all $x_t \in \XX_t$, Equation \eqref{eq:nodal:costpolicy}
can be rewritten
\begin{equation*}
V_t^{\overline \gamma}(x_t) =
\EE\bgc{\sum_{\node \in \NODES}\bp{ L_t^{\node}(x_t^{\node}, \overline \gamma_t^{\node}(x_t),
\boldsymbol{W}_{t+1}^{\node})} + V_{t+1}^{\overline \gamma}(\boldsymbol{X}_{t+1}) } \eqfinv
\end{equation*}
Using the induction assumption, we deduce that
\begin{align*}
V_t^{\overline \gamma}(x_t)
&\leq
\EE\bgc{\sum_{\node \in \NODES} L_t^{\node}(x_t^{\node}, \overline \gamma_t^{\node}(x_t),
\boldsymbol{W}_{t+1}^{\node}) + \overline{V}_{t+1}^{\node}(\boldsymbol{X}_{t+1}^{\node}) } \eqfinp\\
\intertext{From the very definition~\eqref{eq:nodal:globalresourcepolicy}
of the global resource policy~$\overline{\gamma}$, we obtain}
V_t^{\overline \gamma}(x_t)
& \leq \inf_{\sequence{u_t^\node}{\node \in \NODES}} \;
\EE \bgc{\sum_{\node \in \NODES} L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) +
\overline{V}_{t+1}^{\node}(\boldsymbol{X}_{t+1}^{\node})} \eqfinv \\
& \hspace{1.0cm} \text{s.t.}\ \;
\ba{\Theta_t^\node(x_t^\node, u_t^\node)}_{\node \in \NODES} \in -S_t \eqfinp
\end{align*}
Introducing a deterministic admissible resource
vector $\sequence{r_t^\node}{\node \in \NODES} \in -S_t$
and restricting the coupling constraint to it strengthens
the inequality, thus giving
\begin{subequations}
\label{eq:nodal:proof:temp1}
\begin{align}
V_t^{\overline \gamma}(x_t)
& \leq \inf_{{\sequence{u_t^\node}{\node \in \NODES}}} \;
\EE \bgc{\sum_{\node \in \NODES} L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) +
\overline{V}_{t+1}^{\node}(\boldsymbol{X}_{t+1}^{\node}) } \\
& \hspace{1.0cm} \text{s.t.}\ \;
\Theta_t^\node(x_t^\node, u_t^\node) = r_t^\node \eqsepv \forall {\node \in \NODES}
\eqfinv
\end{align}
\end{subequations}
so that
\begin{equation*}
V_t^{\overline \gamma}(x_t) \leq \sum_{\node \in \NODES}
\Bp{\inf_{u_t^{\node}}
\EE\bc{L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) +
\overline{V}_{t+1}^{\node}(\boldsymbol{X}_{t+1}^{\node})}
\;\; \text{s.t.} \;\; \Theta_t^{\node}(x_t^{\node},u_t^{\node}) = r_t^{\node}}
\end{equation*}
as we do not have any coupling left in \eqref{eq:nodal:proof:temp1}.
By Equation~\eqref{eq:nodal:localdpquant}, we deduce that
\( V_t^{\overline \gamma}(x_t)\leq \sum_{\node \in \NODES}
\overline{V}_t^{\node}(x_t^{\node}) \),
hence the result at time~$t$.
Furthermore, for any admissible policy $\gamma$,
we have $V_t(x_t) \leq V_t^{\gamma}(x_t)$ as the global Bellman
function gives the minimal cost starting at any point $x_t \in \XX_t$.
We therefore obtain all the other inequalities
in~\eqref{eq:nodal:boundresourcepolicy}.
\end{proof}
\subsection{Analysis of the Decentralized Information Structure}
\label{subsec:nodal:decentralizedinformation}
An interesting consequence of Propositions
\ref{prop:nodal:dppriceconstant} and~\ref{prop:nodal:dpquantconstant}
is that the local price and resource value functions
$\underline{V}_t^{\node}\nc{p^{\node}}$ in~\eqref{eq:nodal:priceproblem-t}
and~$\overline{V}_t^{\node}\nc{r^{\node}}$
in~\eqref{eq:nodal:quantproblem-t}
remain the same when choosing either the centralized information
structure or the decentralized one in~\S\ref{subsec:nodal:generic:globaldata}.
By contrast, the global value functions~$V_t$ in~\eqref{eq:nodal:vf}
depend on that choice. Let us denote by~$V^{\mathrm{C}}_t$
(resp. $V^{\mathrm{D}}_t$) the value functions~\eqref{eq:nodal:vf}
in the centralized (resp. decentralized) case
where \( \sigma(\boldsymbol{U}_s^\node) \subset \mathcal{F}_s \)
(resp. \( \sigma(\boldsymbol{U}_s^\node) \subset \mathcal{F}_s^\node \)). Since the admissible
set induced by the constraint~\eqref{eq:nodal:measurability}
in the centralized case is larger than the one in the decentralized
case (because $\mathcal{F}_t^{\node} \subset \mathcal{F}_t$ by \eqref{eq:nodal:localinfo}),
we deduce that the lower bound is tighter for the centralized problem,
and the upper bound tighter for the decentralized problem:
for all $x_t = \sequence{x_t^\node}{\node \in \NODES} \in \XX_t$,
\begin{equation}
\label{eq:boundsCandD}
\sum_{\node \in \NODES} \underline{V}_t^{\node}\nc{p^{\node}}(x_t^{\node})
\leq V^{\mathrm{C}}_t(x_t)
\leq V^{\mathrm{D}}_t(x_t)
\leq \sum_{\node \in \NODES} \overline{V}_t^{\node}\nc{r^{\node}}(x_t^{\node})
\eqfinp
\end{equation}
Now, we show that, in some specific cases (often encountered in practical
applications), the best upper bound
in~\eqref{eq:boundsCandD} is equal to the optimal value~$V^{\mathrm{D}}_t(x_t)$ of the decentralized problem.
\begin{proposition}
\label{prop:upperequalD}
If, for all~$t \in \ic{0,T-1}$, we have the equivalence
\begin{equation}
\label{eq:upperequalD-ass}
\begin{split}
\ba{\Theta_t^\node(\boldsymbol{X}_t^\node, \boldsymbol{U}_t^\node)}_{\node \in \NODES}
\in -S_t
\iff \\
\bp{\exists \sequence{r_t^\node}{\node \in \NODES}\in {-}S_t \eqsepv
\Theta_t^{\node}(\boldsymbol{X}_t^{\node}, \boldsymbol{U}_t^{\node})=r_t^{\node}
\quad \forall \node \in \NODES}
\eqfinv
\end{split}
\end{equation}
then the optimal value
$V^{\mathrm{D}}_0(x_0)$ of the decentralized problem ---
that is,
given by~\eqref{eq:nodal:vf} where \( \sigma(\boldsymbol{U}_s^\node) \subset \mathcal{F}_s^\node \)
in~\eqref{eq:nodal:measurability} --- satisfies
\begin{equation}
\label{eq:upperequalD-prop}
V_0^{\mathrm{D}}(x_0) =
\inf_{r \in -S} \;
\sum_{\node \in \NODES} \overline{V}_0^{\node}\nc{r^{\node}}(x_0^{\node}) \eqfinp
\end{equation}
\end{proposition}
\begin{proof}
Using Assumption~\eqref{eq:upperequalD-ass},
Problem~\eqref{eq:nodal:vf} for~$t=0$
can be written as
\begin{align*}
V_0^{\mathrm{D}}(x_0) =
& \inf_{r\in -S}
\Bgp{\sum_{\node \in \NODES} \inf_{\boldsymbol{X}^{\node}, \boldsymbol{U}^{\node}} \;
\EE\bgc{ \sum_{t=0}^{T-1}
L^{\node}_t(\va X_t^{\node}, \va U_t^{\node}, \va W^{\node}_{t+1}) +
K^{\node}(\boldsymbol{X}_T^{\node})}} \eqfinv \\
& \text{s.t.}\ \: \boldsymbol{X}_0^{\node} = x_0^{\node} \text{\, and \,}
\FORALLTIMES{s}{0}{T\!-\!1},
\eqref{eq:nodal:dynamic}, \eqref{eq:nodal:measurability},
\Theta_s^{\node}(\boldsymbol{X}_s^{\node}, \boldsymbol{U}_s^{\node}) = r_s^{\node}
\eqfinv
\nonumber\\
=
& \inf_{r\in -S} \;
\sum_{\node \in \NODES} \overline{V}_0^{\node}\nc{r^{\node}}(x_0^{\node}) \eqfinv
\end{align*}
the last equality arising from the definition
of~$\overline{V}_0^{\node}\nc{r^{\node}}$ in~\eqref{eq:nodal:quantproblem-t}
for $t=0$.
\end{proof}
As an application of the previous Proposition~\ref{prop:upperequalD},
we consider the case
of a decentralized information structure with an additional
\emph{independence assumption in space} (whereas
Assumption~\ref{hyp:independent} is an independence assumption \emph{in time}).
\begin{corollary}
We consider the case of a decentralized information structure
with the following two additional assumptions:
\begin{itemize}
\item
the random processes $\boldsymbol{W}^\node$, for $\node \in \NODES$, are independent,
\item
the coupling constraints~\eqref{eq:nodal:couplingcons}
are of the form
\( \sum_{\node \in \NODES}\Theta_t^{\node}(\boldsymbol{X}_t^{\node}, \boldsymbol{U}_t^{\node}) = 0 \).
\end{itemize}
Then, the assumptions of Proposition~\ref{prop:upperequalD}
are satisfied, so that Equality~\eqref{eq:upperequalD-prop} holds true.
\label{cor:upperequalD}
\end{corollary}
\begin{proof}
From the dynamic constraint~\eqref{eq:nodal:dynamic} and from
the measurability constraint~\eqref{eq:nodal:measurability},
we have that each term~$\Theta_t^{\node}(\boldsymbol{X}_t^{\node}, \boldsymbol{U}_t^{\node})$ is
$\mathcal{F}_t^{\node}$-measurable in the decentralized information structure case.
Since the random processes $\boldsymbol{W}^\node$, for $\node \in \NODES$, are independent,
so are the $\sigma$-fields~$\mathcal{F}_t^{\node}$, for $\node \in \NODES$, from which we
deduce that the random variables
$\Theta_t^{\node}(\boldsymbol{X}_t^{\node}, \boldsymbol{U}_t^{\node})$ are independent.
Now, these random variables sum up to zero.
But it is well-known that, if a sum of independent random variables
is zero, then every random variable in the sum is constant (deterministic).
Hence, each random variable $\Theta_t^{\node}(\boldsymbol{X}_t^{\node}, \boldsymbol{U}_t^{\node})$ is constant.
By introducing their constant values~$\sequence{r_t^\node}{\node \in \NODES}$,
the constraints~\eqref{eq:nodal:couplingcons} are written equivalently
$\Theta_t^{\node}(\boldsymbol{X}_t^{\node},\boldsymbol{U}_t^{\node}) - r_t^{\node} = 0$,
$\forall \: \node \in\NODES$,
and $\sum_{\node\in\NODES} r_t^{\node} = 0$.
We conclude with Proposition~\ref{prop:upperequalD}.
\end{proof}
\begin{remark}
\label{rem:decentralizedpolicy}
In the case of a decentralized information structure~\eqref{eq:nodal:localinfo},
it seems difficult to produce Bellman-based online policies.
Indeed, neither the global price policy in~\eqref{eq:nodal:globalpricepolicy}
nor the global resource policy in~\eqref{eq:nodal:globalresourcepolicy}
is implementable since both policies require the knowledge of the global
state $\sequence{x_t^\node}{\node \in \NODES}$ for each unit~$\node$, which is incompatible
with the information constraint~\eqref{eq:nodal:localinfo}.
Nevertheless, one can use the results given by resource decomposition
to compute a local state feedback as follows.
For a given deterministic admissible resource
process~$r \in -S$, solving at time~$t$ and for each~$\node \in\NODES$
the subproblem
\begin{align*}
\overline\gamma_t^{\node}(x_t^{\node}) \in \argmin_{u_t^{\node}}
& \; \EE\Bc{ L_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) +
\overline V_{t+1}^{\node}\bp{ g_t^{\node}(x_t^{\node}, u_t^{\node}, \boldsymbol{W}_{t+1}^{\node}) }} \eqfinv \\
\text{s.t.}\
& \Theta_t^{\node}(x_t^{\node}, u_t^{\node}) = r_t^{\node}
\end{align*}
generates a local state feedback
\( \overline\gamma_t^{\node} : \XX_t^{\node} \to \UU_t^{\node} \) which is both
compatible with the decentralized information
structure~\eqref{eq:nodal:localinfo}
and such that the policy \( \overline \gamma =
\ba{\overline\gamma_{t}}_{t\in \ic{0, T-1}} \) is admissible
as it satisfies the global coupling
constraint~\eqref{eq:nodal:couplingcons}
between all units because $r \in -S$,
where $S$ is defined in~\eqref{eq:nodal:global_constraint_set}.
By contrast, replicating this procedure with a deterministic admissible
price process would produce a policy which would not satisfy the global
coupling constraint~\eqref{eq:nodal:couplingcons}.
\end{remark}
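To fix ideas, the following Python sketch implements such a local resource feedback for one unit. Only the local state and the deterministic resource target enter the computation, and the control is restricted to the fiber $\Theta_t^\node(x,u)=r_t^\node$; the grids, dynamics, cost, coupling function, local value function and resource target below are all placeholder assumptions.
\begin{verbatim}
import numpy as np

# Local resource feedback compatible with the decentralized information
# structure: it uses only the LOCAL state and a deterministic resource target.
states = np.linspace(0.0, 1.0, 11)
controls = np.linspace(-0.2, 0.2, 9)
noise = [(0.0, 0.5), (0.2, 0.5)]                 # (value, probability)
V_next = np.linspace(1.0, 0.0, 11)               # hypothetical local resource value function

def g(x, u, w):  return min(max(x + u - w, 0.0), 1.0)   # local dynamics
def L(x, u, w):  return max(u + w, 0.0)                 # local cost
def Theta(x, u): return u                               # local coupling function

def local_resource_feedback(x, r, tol=1e-9):
    feasible = [u for u in controls if abs(Theta(x, u) - r) <= tol]
    return min(feasible,
               key=lambda u: sum(p * (L(x, u, w) + np.interp(g(x, u, w), states, V_next))
                                 for w, p in noise))

print("local control at x = 0.5 for r = 0.1:", local_resource_feedback(0.5, 0.1))
\end{verbatim}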
\section{Application to Microgrids Optimal Management}
\label{chap:district:numerics}
We illustrate the effectiveness of the two decomposition schemes
introduced in Sect.~\ref{sec:genericdecomposition}
by presenting numerical results.
In~\S\ref{Description_of_the_problems}, we describe an application
in the optimal management of urban microgrids.
In~\S\ref{ssec:nodalalgorithms}, we detail how we implement
algorithms to obtain bounds and policies.
In~\S\ref{Numerical_results}, we
illustrate the performance of the decomposition methods
with numerical results.
\subsection{Description of the Problems}
\label{Description_of_the_problems}
The energy management problem
and the structure of the microgrids
come from case studies
provided by the urban Energy Transition Institute
Efficacity\footnote{Established in 2014 with the French government support,
Efficacity aims to develop and implement innovative
solutions to build and manage energy-efficient
cities.}.
For more details on microgrid modeling and on the formulation
of associated optimization problems, the reader is referred
to the PhD thesis~\cite{thesepacaud}.
We represent a district microgrid by a directed graph
$(\mathfrak{N}, \mathfrak{A})$, with $\mathfrak{N}$ the set of nodes and $\mathfrak{A}$
the set of arcs. Each node of the graph corresponds to a building.
The buildings exchange energy through the edges of the graph,
hence coupling the different nodes of the graph by static
constraints (Kirchhoff law).
We manage the microgrids over a given day
in summer, with decisions taken every 15 minutes, so that $T = 96$.
Each building has its own electrical and domestic hot water demand profiles,
and possibly its own solar panel production.
At node~$\node$, we consider a random variable~$\boldsymbol{W}_t^\node$,
with values in $\WW_t^\node=\RR^2$, representing the following couple
of uncertainties:
the local electricity demand minus the production of the solar panel;
the domestic hot water demand.
We also suppose given a corresponding finite probability distribution on the set~$\WW_t^\node$.
Each building is equipped with an electrical hot water tank;
some buildings have solar panels and some others have batteries.
We view batteries and electrical hot water tanks as energy stocks
so that, depending on the presence of battery inside the building,
we introduce a state~$\boldsymbol{X}_t^\node$ at node~$\node$ with dimension~2 or~1
(energy stored inside the water tank and energy stored in the battery),
and the same with the control $\boldsymbol{U}_t^\node$ at node~$\node$
(power used to heat the tank and power exchanged with the battery).
Each node of the graph is modelled as a local control system
whose cost function corresponds to the cost of importing electricity from the external
grid. Summing the costs and taking the expectation
(supposing that the $\np{\boldsymbol{W}_1, \cdots, \boldsymbol{W}_T}$
are stagewise independent random variables),
we obtain a global optimization problem of the form~\eqref{eq:nodal:vf}.
We consider five different problems with growing sizes.
Table~\ref{tab:numeric:pbsize} displays the different dimensions
considered.
\begin{table}[!ht]
\centering
{\normalsize
\begin{tabular}{|c|ccccc|}
\hline
Problem & $\card{\mathfrak{N}}$ & $\card{\mathfrak{A}}$ & $\mathbf{\dim(\XX_t)}$ & $\dim(\WW_t)$ & $\card{\mathrm{supp}(\boldsymbol{W}_t)}$ \\
\hline
\hline
\textrm{3-nodes} & 3 & 3 & \textbf{4} & 6 & $10^3$ \\
\textrm{6-nodes} & 6 & 7 & \textbf{8} & 12 & $10^6$ \\
\textrm{12-nodes} & 12 & 16 & \textbf{16} & 24 & $10^{12}$ \\
\textrm{24-nodes} & 24 & 33 & \textbf{32} & 48 & $10^{24}$ \\
\textrm{48-nodes} & 48 & 69 & \textbf{64} & 96 & $10^{48}$ \\
\hline
\end{tabular}
}
\caption{Microgrid management problems with growing dimensions}
\label{tab:numeric:pbsize}
\end{table}
As an example, the 12-nodes problem consists of twelve buildings;
four buildings are equipped with a battery, and four other
buildings are equipped with solar panels.
The devices are dispatched so that a building equipped with a solar
panel is connected to at least one building with a battery.
\subsection{Computing Bounds, Decomposed Value Functions and Devising Policies}
\label{ssec:nodalalgorithms}
We apply the two decomposition algorithms, introduced
in~\S\ref{subsec:nodal:processdesign} and
in~\S\ref{subsec:nodal:admissiblepolicy},
to each problem as described in Table~\ref{tab:numeric:pbsize}.
We will term \emph{Dual Approximate Dynamic Programming}
(DADP) the price decomposition algorithm
and \emph{Primal Approximate Dynamic Programming} (PADP)
the resource decomposition algorithm described in
\S\ref{subsec:nodal:processdesign} and
in~\S\ref{subsec:nodal:admissiblepolicy}.
We compare DADP and PADP
with the well-known Stochastic Dual Dynamic Programming (SDDP)
algorithm (see~\cite{girardeau2014convergence} and references
inside) applied to the global problem.
In this part, we suppose given an initial state
$x_0 = \sequence{x_0^\node}{\node \in \NODES} \in \XX_0$.
Regarding the SDDP algorithm, it is not implementable in
a straightforward manner since the cardinality of the global noise
support becomes huge with the number~$\card{\mathfrak{N}}$ of nodes
(see Table~\ref{tab:numeric:pbsize}), so that the exact computation
of an expectation with respect to the global uncertainty
$\boldsymbol{W}_t= \sequence{\boldsymbol{W}_t^\node}{\node\in \mathfrak{N}}$ is out of reach.
To overcome this issue, we have resampled the probability distribution
of the global noise~$\sequence{\boldsymbol{W}_t^\node}{\node\in \mathfrak{N}}$ at
each time~$t$ by using the $k$-means clustering method
(see \cite{rujeerapaiboon2018scenario}).
Thanks to the convexity properties of the problem, the optimal quantization
yields a new optimization problem
whose optimal value is a lower bound for the optimal value
of the original problem (see \cite{lohndorfmodeling} for details).
Thus, the exact lower bound given by SDDP with resampling remains
a lower bound for the exact lower bound given by SDDP without resampling,
which itself is, by construction, a lower bound for the original problem.
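The snippet below sketches this resampling step in Python: samples are drawn from the product distribution of the (independent) local noises and quantized by a few Lloyd iterations. The supports, probabilities and sizes are illustrative assumptions, and a dedicated clustering library could of course replace the hand-written $k$-means.
\begin{verbatim}
import numpy as np

# Resampling of the global noise distribution by k-means (Lloyd iterations).
# Local supports/probabilities and the sizes below are placeholder assumptions.
rng = np.random.default_rng(0)
n_nodes, n_samples, n_clusters = 12, 5000, 20
local_support = np.array([0.0, 0.5, 1.0])
local_probs = np.array([0.2, 0.5, 0.3])

# Draw samples of W_t = (W_t^1, ..., W_t^N) from the product distribution.
samples = rng.choice(local_support, size=(n_samples, n_nodes), p=local_probs)

centers = samples[rng.choice(n_samples, n_clusters, replace=False)].astype(float)
for _ in range(50):
    dist = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    for j in range(n_clusters):
        pts = samples[labels == j]
        if len(pts):
            centers[j] = pts.mean(axis=0)

weights = np.bincount(labels, minlength=n_clusters) / n_samples
print("quantized support size:", len(centers), " total weight:", weights.sum())
\end{verbatim}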
Regarding DADP and PADP, we use a quasi-Newton algorithm
to perform the maximization with respect to~$p$ in~\eqref{eq:nodal:relaxedconstraintdual}
and the minimization with respect to~$r$ in~\eqref{eq:nodal:overconstraint}.
More precisely, the quasi-Newton algorithm is performed using Ipopt 3.12
(see~\citep{wachter2006implementation}). The algorithm stops either
when a stopping criterion is fulfilled or when no descent direction
is found.
Each algorithm (SDDP, DADP, PADP) returns a sequence
of global value functions indexed by time.
Indeed, SDDP produces approximate global value functions,
and, for DADP (resp. PADP), we sum the local price value functions
(resp. the local resource value functions) obtained as
solutions of the recursive Dynamic Programming
equations~\eqref{eq:localdp} (resp.~\eqref{eq:nodal:localdpquant}),
for the deterministic admissible price coordination process
$p=(p_0,\cdots,p_{T-1}) \in S^\star$
(resp. the deterministic admissible resource coordination process
$r=(r_0,\cdots,r_{T-1}) \in -S$)
obtained at the end of~\S\ref{subsec:nodal:processdesign}
for an initial state
$x_0 = \sequence{x_0^\node}{\node \in \NODES} \in \XX_0$.
As explained in~\S\ref{subsec:nodal:admissiblepolicy},
these global value functions yield policies.
Thus, we have three policies (SDDP, DADP, PADP)
that we can compare.
As the policies are admissible,
the three expected values of the associated costs are
\emph{upper bounds} of the optimal value of the global optimization problem.
\subsection{Numerical Results}
\label{Numerical_results}
We compare the three algorithms (SDDP, DADP, PADP)
regarding their execution time
in~\S\ref{Computation_of_the_Bellman_value_functions},
the quality of their theoretical bounds
in~\S\ref{Quality_of_the_theoretical_bounds},
and the performance of their policies in simulation
in~\S\ref{Policy_simulation_results}.
\subsubsection{CPU Execution Time}
\label{Computation_of_the_Bellman_value_functions}
Table~\ref{tab:district:numeric:optres} details CPU execution time
and number of iterations before reaching stopping criterion
for the three algorithms.
\begin{table}[!ht]
\centering
{\normalsize
\begin{tabular}{|l|ccccc|}
\hline
Problem & \textrm{3-nodes} \hspace{-0.2cm}
& \textrm{6-nodes} \hspace{-0.2cm}
& \textrm{12-nodes} \hspace{-0.2cm}
& \textrm{24-nodes} \hspace{-0.2cm}
& \textrm{48-nodes} \hspace{-0.2cm} \\
\hline
dim($\XX_t$) & 4 & 8 & 16
& 32 & 64 \\
\hline
\hline
SDDP CPU time & 1' & 3' & 10'
& 79' & 453' \\
SDDP iterations & 30 & 100 & 180
& 500 & 1500 \\
\hline
\hline
DADP CPU time & 6' & 14' & 29'
& 41' & 128' \\
DADP iterations & 27 & 34 & 30
& 19 & 29 \\
\hline
\hline
PADP CPU time & 3' & 7' & 22'
& 49' & 91' \\
PADP iterations & 11 & 12 & 20
& 19 & 20 \\
\hline
\end{tabular}
}
\caption{Comparison of CPU time and number of iterations for SDDP, DADP and PADP}
\label{tab:district:numeric:optres}
\end{table}
For a small-scale problem like \textrm{3-nodes} (second column
of Table~\ref{tab:district:numeric:optres}), SDDP is faster
than DADP and PADP. However, for the 48-nodes problem (last
column of Table~\ref{tab:district:numeric:optres}),
\emph{DADP and PADP} are \emph{more than three times faster}
than SDDP.
Figure~\ref{fig:nodal:cputime} depicts the CPU
time taken by the different algorithms as a function of the state dimension.
For this case study, we observe
that the \emph{CPU time grows almost linearly} with the dimension of the state
for DADP and PADP, whereas it grows exponentially for SDDP.
Otherwise stated, decomposition methods scale better than SDDP
in terms of CPU time for large microgrid instances.
\begin{figure}
\caption{CPU time for the three algorithms as a function
of the state dimension}
\label{fig:nodal:cputime}
\end{figure}
\subsubsection{Quality of the Theoretical Bounds}
\label{Quality_of_the_theoretical_bounds}
In Table~\ref{tab:district:numeric:upperlower},
we give the lower and upper bounds (of the optimal
cost~$V_0(x_0)$ of the global optimization problem)
achieved by the three algorithms (SDDP, DADP, PADP).
We recall that SDDP returns a lower bound of the optimal
cost~$V_0(x_0)$, both by nature and also because we used
a suitable resampling of the global uncertainty distribution
instead of the original distribution itself (see the discussion
in~\S\ref{ssec:nodalalgorithms}).
DADP and PADP lower and upper bounds are given by
Equation~\eqref{eq:nodal:relaxedconstraintdual}
and Equation~\eqref{eq:nodal:overconstraint} respectively.
In Table~\ref{tab:district:numeric:upperlower}, we observe that
\begin{itemize}
\item SDDP's and DADP's lower bounds are close to each other,
\item for problems with 12 nodes or more, DADP's lower
bound is up to 2.6\% better than SDDP's lower bound,
\item the gap between PADP's upper bound and
the two lower bounds is rather large.
\end{itemize}
\begin{table}[!ht]
\centering
{\normalsize
\begin{tabular}{|l|ccccc|}
\hline
Problem & \textrm{3-nodes} & \textrm{6-nodes} & \textrm{12-nodes} & \textrm{24-nodes} & \textrm{48-nodes} \\
\hline
\hline
SDDP LB & 225.2 & 455.9 & 889.7 & 1752.8 & 3310.3 \\
\hline
DADP LB & 213.7 & 447.3 & 896.7 & 1787.0 & 3396.4 \\
\hline
PADP UB & 252.1 & 528.5 & 1052.3 & 2100.7 & 4016.6 \\
\hline
\end{tabular}
}
\caption{Upper and lower bounds (of the optimal
cost~$V_0(x_0)$ of the global optimization problem) given by SDDP, DADP and PADP}
\label{tab:district:numeric:upperlower}
\end{table}
To sum up, DADP achieves a slightly better lower bound than SDDP,
with much less CPU time (and a parallel version of DADP would
give even better performance in terms of CPU time).
\subsubsection{Policy Simulation Performances}
\label{Policy_simulation_results}
In Table~\ref{tab:district:numeric:simulation},
we give the performances of the policies yielded by
the three algorithms.
The SDDP, DADP and PADP values are obtained by Monte Carlo simulation of the
corresponding policies on $5,000$ scenarios. The notation
$\pm$ corresponds to the 95\% confidence interval for the
numerical evaluation of the expected costs. We use
the value obtained by the SDDP policy as a reference,
a positive gap meaning that the corresponding policy
performs better than the SDDP policy.
All these values are \emph{statistical} upper bounds of the optimal
cost~$V_0(x_0)$ of the global optimization problem.
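The statistical evaluation itself is standard; a minimal Python sketch is given below, in which the policy, dynamics and stage cost are toy placeholders standing in for the actual simulator.
\begin{verbatim}
import numpy as np

# Monte Carlo evaluation of a policy with a 95% confidence interval.
# Policy, dynamics and stage cost are toy placeholders.
rng = np.random.default_rng(1)
T, n_scenarios = 96, 5000

def policy(t, x):                 # stand-in for the SDDP/DADP/PADP feedback
    return -0.1 * x

def simulate(x0):
    x, total = x0, 0.0
    for t in range(T):
        w = rng.normal(0.0, 0.1)  # toy noise
        u = policy(t, x)
        total += max(w - u, 0.0)  # toy stage cost
        x = min(max(x + u + w, 0.0), 1.0)
    return total

costs = np.array([simulate(0.5) for _ in range(n_scenarios)])
half_width = 1.96 * costs.std(ddof=1) / np.sqrt(n_scenarios)
print(f"estimated expected cost: {costs.mean():.2f} +/- {half_width:.2f} (95% CI)")
\end{verbatim}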
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|ccccc|}
\hline
Network & \textrm{3-nodes} & \textrm{6-nodes} & \textrm{12-nodes} &
\textrm{24-nodes} & \textrm{48-nodes} \\
\hline
\hline
SDDP value & 226 $\pm$ 0.6 & 471 $\pm$ 0.8 & 936 $\pm$ 1.1 & 1859 $\pm$ 1.6 & 3550 $\pm$ 2.3 \\
\hline
\hline
DADP value & 228 $\pm$ 0.6 & 464 $\pm$ 0.8 & 923 $\pm$ 1.2 & 1839 $\pm$ 1.6 & 3490 $\pm$ 2.3 \\
Gap & - 0.8 \% & + 1.5 \% & +1.4\% & +1.1\% & +1.7\% \\
\hline
\hline
PADP value & 229 $\pm$ 0.6 & 471 $\pm$ 0.8 & 931 $\pm$ 1.1 & 1856 $\pm$ 1.6 & 3508 $\pm$ 2.2 \\
Gap & -1.3\% & 0.0\% & +0.5\% & +0.2\% & +1.2\% \\
\hline
\end{tabular}
}
\caption{Simulation costs (Monte Carlo) for policies induced by
SDDP, DADP and PADP}
\label{tab:district:numeric:simulation}
\end{table}
We make the following observations:
\begin{itemize}
\item
for problems with more than 6 nodes,
both the DADP policy and the PADP policy beat
the SDDP policy,
\item
the DADP policy gives better results than the PADP policy,
\item
comparing with the last line of Table
\ref{tab:district:numeric:upperlower}, the statistical
upper bounds
are much closer to the SDDP and DADP lower bounds than PADP's
exact upper bound is.
\end{itemize}
For this last observation, our interpretation is as follows:
the PADP algorithm is penalized because, as the resource coordination
process is deterministic, it imposes constant
importation flows for every possible realization of
the uncertainties (see also the interpretation of PADP
in the case of a decentralized information structure
in~\S\ref{subsec:nodal:decentralizedinformation}).
\section{Conclusions}
We have considered multistage stochastic optimization problems
involving multiple units coupled by spatial static constraints.
We have presented a formalism for joint
temporal and spatial decomposition.
We have provided two fully parallelizable algorithms
that yield theoretical bounds, value functions
and admissible policies.
We have stressed the key role played by information structures in the
performance of the decomposition schemes.
We have tested these algorithms on the management of
several district microgrids. Numerical results have shown the effectiveness
of the approach: the price decomposition algorithm beats
the reference SDDP algorithm for large-scale problems with
more than 12~nodes, both in terms of theoretical bounds and
policy performance, and in terms of computation time. On problems
with up to 48~nodes (corresponding to 64~state variables), we have
observed how the performance scales as the dimension of the state
grows: SDDP is affected by the well-known curse of dimensionality,
whereas decomposition-based methods are not.
Possible extensions are the following.
In~\S\ref{subsec:nodal:processdesign} and
in~\S\ref{subsec:nodal:admissiblepolicy},
we have presented a serial version of the decomposition algorithms,
but we believe that leveraging their parallel nature could decrease
further their computation time.
In~\S\ref{subsec:nodal:decomposedDPdeterministic},
we have only considered deterministic price and resource
coordination processes. Using larger search sets
for the coordination variables, e.g. considering
Markovian coordination processes, would make it
possible to improve the performance of the algorithms
(see \cite[Chap.~7]{thesepacaud} for further details).
However, one would need to analyze how to obtain a good
trade-off between accuracy and numerical performance.
\end{document}
\begin{document}
\title{Binomial transforms of the modified $k$-Fibonacci-like sequence}
\date{}
\author{Youngwoo Kwon\\
Department of mathematics, Korea University, Seoul, Republic of Korea\\
\href{mailto:[email protected]}{\tt [email protected]}\\
}
\maketitle
\begin{abstract}
This study applies the binomial, $k$-binomial, rising $k$-binomial and falling $k$-binomial transforms to the modified $k$-Fibonacci-like sequence. In addition, the Binet formulas and generating functions of these four transforms are obtained from their recurrence relations.
\end{abstract}
\section{Introduction}
The Fibonacci sequence $\left(F_{n}\right)_{n\geq0}$ is defined by the recurrence relation
\begin{align*}
F_{n+1}=&F_{n}+F_{n-1} \text{~for~} n\ge1
\end{align*}
with the initial conditions $F_{0}=0$ and $F_{1}=1$.
Many authors have studied the Fibonacci sequence; some of them introduced new related sequences and proved many identities for them.
In particular, Falc\'{o}n and Plaza \cite{FP02} introduced the $k$-Fibonacci sequence.
\begin{definition}[\cite{FP02}]
For any positive real number $k$, the $k$-Fibonacci sequence $\left( F_{k,n} \right)_{n\geq0}$ is defined by the recurrence relation
$$F_{k,n+1} = k F_{k,n} + F_{k,n-1} ~\text{for}~n\ge1$$
with the initial conditions $F_{k,0} =0$ and $F_{k,1} =1$.
\end{definition}
Also, Kwon \cite{YK} introduced the modified $k$-Fibonacci-like sequence.
\begin{definition}[\cite{YK}]
For any positive real number $k$, the modified $k$-Fibonacci-like sequence $\left(M_{k,n}\right)_{n\geq0}$ is defined by the recurrence relation
$$M_{k,n+1} = k M_{k,n} + M_{k,n-1} ~\text{for}~n\ge1 $$
with the initial conditions $M_{k,0} = M_{k,1} = 2$.
\end{definition}
The first few modified $k$-Fibonacci-like numbers are as follows:
\begin{align*}
M_{k,2}=& 2k+2,\\
M_{k,3}=& 2k^{2}+2k+2,\\
M_{k,4}=& 2k^{3}+2k^{2}+4k+2,\\
M_{k,5}=& 2k^{4}+2k^{3} +6k^{2}+4k+2.
\end{align*}
Kwon \cite{YK} studied the following identities between the $k$-Fibonacci sequence and the modified $k$-Fibonacci-like sequence.
$$M_{k,n} = 2 \left( F_{k,n} + F_{k,n-1}\right) \text{ and } F_{k,n} = \frac{1}{2}\sum_{i=0}^{n-1} M_{k,n-i}(-1)^{i}$$
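These identities are easy to check numerically; the short Python sketch below generates both sequences from their recurrences and verifies the two identities for small values of $k$ and $n$.
\begin{verbatim}
# Numerical check of the identities relating F_{k,n} and M_{k,n}.
def k_fibonacci(k, n_max):
    F = [0, 1]
    for _ in range(2, n_max + 1):
        F.append(k * F[-1] + F[-2])
    return F

def modified_k_fib_like(k, n_max):
    M = [2, 2]
    for _ in range(2, n_max + 1):
        M.append(k * M[-1] + M[-2])
    return M

for k in (1, 2, 3):
    F, M = k_fibonacci(k, 10), modified_k_fib_like(k, 10)
    assert all(M[n] == 2 * (F[n] + F[n - 1]) for n in range(1, 11))
    assert all(2 * F[n] == sum((-1) ** i * M[n - i] for i in range(n)) for n in range(1, 11))
print("identities verified for k = 1, 2, 3 and n <= 10")
\end{verbatim}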
Spivey and Steil \cite{SS} introduced various binomial transforms.
\begin{enumerate}
\item[(1)] The binomial transform $B$ of the integer sequence $A=\left\{a_{0}, a_1 , a_2 , \ldots \right\}$, which is denoted by $B(A)=\left\{b_{n}\right\}$ and defined by
$$b_{n} = \sum_{i=0}^{n}\binom{n}{i} a_{i}.$$
\item[(2)] The $k$-binomial transform $W$ of the integer sequence $A=\left\{a_{0}, a_1 , a_2 , \ldots \right\}$, which is denoted by $W(A)=\left\{w_{n}\right\}$ and defined by
$$w_{n} = \sum_{i=0}^{n}\binom{n}{i} k^{n} a_{i}.$$
\item[(3)] The rising $k$-binomial transform $R$ of the integer sequence $A=\left\{a_{0}, a_1 , a_2 , \ldots \right\}$, which is denoted by $R(A)=\left\{r_{n}\right\}$ and defined by
$$r_{n} = \sum_{i=0}^{n}\binom{n}{i} k^{i} a_{i} .$$
\item[(4)] The falling $k$-binomial transform $F$ of the integer sequence $A=\left\{a_{0}, a_1 , a_2 , \ldots \right\}$, which is denoted by $F(A)=\left\{f_{n}\right\}$ and defined by
$$f_{n} = \sum_{i=0}^{n}\binom{n}{i} k^{n-i} a_{i} .$$
\end{enumerate}
Other recent research \cite{BJS, FP03, YT02} has also examined various binomial transforms of several special sequences. These transforms are interesting and meaningful as they introduce several new approaches.
Building on those preceding studies, this study applies the four binomial transforms, namely the binomial, $k$-binomial, rising $k$-binomial and falling $k$-binomial transforms, to the modified $k$-Fibonacci-like sequence. This study also proves their properties.
\section{The binomial transform of the modified \texorpdfstring{$k$}{Lg}-Fibonacci-like sequence}
The binomial transform of the modified $k$-Fibonacci-like sequence $\left(M_{k,n}\right)_{n\geq0}$ is denoted by $B_{k}=\left(b_{k,n}\right)_{n \geq0}$ where
$$b_{k,n} = \sum_{i=0}^{n}\binom{n}{i}M_{k,i}.$$
The first few binomial transforms of the modified $k$-Fibonacci-like sequences are listed below; only $B_{1}$ and $B_{2}$ are indexed in the OEIS \cite{S}:
\begin{align*}
B_{1} =&\left\{2, 4, 10, 26, 68, 178, \ldots \right\} : A052995-\{0\} \text{ or } A055819-\{1\}\\
B_{2} =&\left\{2, 4, 12, 40, 136, 464, \ldots \right\} : A056236\\
B_{3} =&\left\{2, 4, 14, 58, 248, 1066, \ldots \right\} \\
B_{4} =&\left\{2, 4, 16, 80, 416, 2176, \ldots \right\} \\
B_{5} =&\left\{2, 4, 18, 106, 652, 4034, \ldots \right\}
\end{align*}
\begin{lemma}\label{binomial_T}
The binomial transform of the modified $k$-Fibonacci-like sequence satisfies the relation
$$b_{k,n+1} - b_{k,n} = \sum_{i=0}^{n}\binom{n}{i} M_{k,i+1}.$$
\end{lemma}
\begin{proof}
Note that $\binom{n}{0}=1$ and $\binom{n+1}{i} = \binom{n}{i}+\binom{n}{i-1}$.
The difference of the two consecutive binomial transforms is the following:
\begin{align*}
b_{k,n+1} -b_{k,n}&= \sum_{i=0}^{n+1} \binom{n+1}{i} M_{k,i}-\sum_{i=0}^{n}\binom{n}{i}M_{k,i}\\
&=\sum_{i=1}^{n}\left[\binom{n+1}{i}-\binom{n}{i} \right]M_{k,i} + M_{k,n+1} \\
&=\sum_{i=1}^{n}\binom{n}{i-1}M_{k,i}+M_{k,n+1}\\
&=\sum_{i=0}^{n-1}\binom{n}{i}M_{k,i+1} + \binom{n}{n}M_{k,n+1}=\sum_{i=0}^{n}\binom{n}{i}M_{k,i+1}
\end{align*}
\end{proof}
Note that $b_{k,n+1} = \sum_{i=0}^{n}\binom{n}{i}\left(M_{k,i}+M_{k,i+1}\right)$.
\begin{theorem}\label{binomial_Ta}
The binomial transform of the modified $k$-Fibonacci-like sequence $B_{k}=\left(b_{k,n}\right)_{n\geq0}$ satisfies the recurrence relation
$$b_{k,n+1} = (k+2)b_{k,n} - k b_{k,n-1} ~\text{for}~n\ge1$$
with the initial conditions $b_{k,0}=2$, $b_{k,1} = 4$.
\end{theorem}
\begin{proof}
By Lemma \ref{binomial_T}, since $b_{k,n+1} = \sum_{i=0}^{n}\binom{n}{i}\left(M_{k,i}+M_{k,i+1}\right)$, then we have
\begin{align*}
b_{k,n+1}=&M_{k,0} +M_{k,1} + \sum_{i=1}^{n}\binom{n}{i}\left(M_{k,i} + M_{k,i+1}\right)\\
=&M_{k,0}+M_{k,1}+\sum_{i=1}^{n}\binom{n}{i}\left(M_{k,i}+kM_{k,i}+M_{k,i-1}\right)\\
=&\left[(k+1)M_{k,0}+(k+1)\sum_{i=1}^{n}\binom{n}{i}M_{k,i}\right]\\
&+\sum_{i=1}^{n}\binom{n}{i}M_{k,i-1} + M_{k,1}-kM_{k,0}\\
=&(k+1)\sum_{i=0}^{n}\binom{n}{i}M_{k,i}+\sum_{i=1}^{n}\binom{n}{i}M_{k,i-1}+M_{k,1}-kM_{k,0}\\
=&(k+1)b_{k,n}+\sum_{i=1}^{n}\binom{n}{i}M_{k,i-1} +2-2k.
\end{align*}
On the other hand, noting that $\binom{n-1}{n}=0$, we can similarly obtain the following:
\begin{align*}
b_{k,n} &=kb_{k,n-1} + \sum_{i=1}^{n}\binom{n}{i}M_{k,i-1}+2-2k.
\end{align*}
Subtracting the second identity from the first yields
$$b_{k,n+1} - (k+1)b_{k,n} = b_{k,n} - k b_{k,n-1},$$
and so
$$b_{k,n+1} = (k+2)b_{k,n} - k b_{k,n-1}.$$
\end{proof}
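For instance, for $k=2$ the recurrence gives $b_{2,2}=(k+2)\,b_{2,1}-k\,b_{2,0}=4\cdot4-2\cdot2=12$ and $b_{2,3}=4\cdot12-2\cdot4=40$, in agreement with the values of $B_{2}$ listed above.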
Binet-type formulas are well known in Fibonacci number theory. We now give Binet's formula for the binomial transform of the modified $k$-Fibonacci-like sequence.
\begin{theorem}\label{binet_T}
Binet's formula for the binomial transform of the modified $k$-Fibonacci-like sequence is given by
$$b_{k,n}= 4\frac{r_{1}^{n}-r_{2}^{n}}{r_{1}-r_{2}}- 2k\frac{r_{1}^{n-1}-r_{2}^{n-1}}{r_{1}-r_{2}},$$
where $r_{1}$ and $r_{2}$ are the roots of the characteristic equation $x^{2}-(k+2)x+k=0$, and $r_{1}>r_{2}$.
\end{theorem}
\begin{proof}
The characteristic polynomial equation of $b_{k,n+1} = (k+2)b_{k,n} - k b_{k,n-1}$ is $x^{2} - (k+2)x + k =0$, whose solutions are $r_{1}$ and $r_{2}$ with $r_{1}>r_{2}$. The general term of the binomial transform may be expressed in the form $b_{k,n}=C_{1}r_{1}^{n} + C_{2}r_{2}^{n}$ for some coefficients $C_{1}$ and $C_{2}$.
\begin{enumerate}
\item[(1)] $b_{k,0} = C_{1} + C_{2} = 2$
\item[(2)] $b_{k,1}=C_{1}r_{1} + C_{2}r_{2} = 4$
\end{enumerate}
Then
$$C_{1} = \frac{4-2r_{2}}{r_{1}-r_{2}} \text{ and } C_{2}=\frac{2r_{1} -4}{r_{1}-r_{2}}.$$
Therefore,
$$b_{k,n} = \frac{4-2r_{2}}{r_{1}-r_{2}} r_{1}^{n} + \frac{2r_{1} -4}{r_{1}-r_{2}} r_{2}^{n} =4 \frac{r_{1}^{n}-r_{2}^{n}}{r_{1}-r_{2}} -2k\frac{r_{1}^{n-1}-r_{2}^{n-1}}{r_{1}-r_{2}}. $$
\end{proof}
The terms of the binomial transform $B_{k}$ can be regarded as the coefficients of a power series, called its generating function. Thus, if $b_{k}(x)$ denotes this generating function, we can write
$$b_{k}(x)=\sum_{i=0}^{\infty} b_{k,i}x^{i} = b_{k,0}+b_{k,1}x+b_{k,2}x^{2}+\cdots.$$
Multiplying by $(k+2)x$ and by $kx^{2}$ gives
\begin{align*}
(k+2)x b_{k}(x)=&(k+2)b_{k,0}x+(k+2)b_{k,1}x^2 + (k+2)b_{k,2}x^{3}+\cdots,\\
kx^{2}b_{k}(x)=&k b_{k,0}x^{2} + k b_{k,1}x^{3} + k b_{k,2} x^{4} + \cdots.
\end{align*}
Since $b_{k,n+1} - (k+2)b_{k,n} + k b_{k,n-1} = 0$, $b_{k,0}=2$, and $b_{k,1}=4$, then we have
\begin{align*}
&(1-(k+2)x+kx^{2}) b_{k} (x)\\
=& b_{k,0} + (b_{k,1} - (k+2)b_{k,0})x + (b_{k,2} - (k+2)b_{k,1} + kb_{k,0} )x^{2} + \cdots\\
=& b_{k,0} + (b_{k,1}-(k+2)b_{k,0})x\\
=&2+(4-(k+2)2)x = 2-2kx.
\end{align*}
Hence, the generating function for the binomial transform of the modified $k$-Fibonacci-like sequence $\left(b_{k,n}\right)_{n\geq0}$ is
$$b_{k}(x) = \frac{2(1-kx)}{1-(k+2)x+kx^{2}}.$$
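The expansion of this generating function can be checked symbolically; a short sketch using the SymPy library (an independent check, not part of the original derivation) is given below, and for $k=2$ it reproduces the sequence $B_{2}$.
\begin{verbatim}
import sympy as sp

x, k = sp.symbols('x k')
gf = 2*(1 - k*x) / (1 - (k + 2)*x + k*x**2)

# Series expansion for k = 2: 2 + 4*x + 12*x**2 + 40*x**3 + 136*x**4 + 464*x**5 + ...
print(sp.series(gf.subs(k, 2), x, 0, 6))
\end{verbatim}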
\section{The \texorpdfstring{$k$}{Lg}-binomial transform of the modified \texorpdfstring{$k$}{Lg}-Fibonacci-like sequence }
The $k$-binomial transform of the modified $k$-Fibonacci-like sequence $\left(M_{k,n}\right)_{n\geq0}$ is denoted by $W_{k}=\left(w_{k,n}\right)_{n\geq0}$ where
\begin{displaymath}
w_{k,n} =
\begin{cases}
\sum_{i=0}^{n}\binom{n}{i} k^{n}M_{k,i}, & \text{ for } k\ne 0 \text{ or } n\ne 0; \\
0, & \text{ if } k=0 \text{ and } n=0.
\end{cases}
\end{displaymath}
The first $k$-binomial transforms are as follows:
\begin{align*}
W_{1}=&\left\{2, 4, 10, 26, 68, 178, \ldots \right\} : A052995-\{0\} \text{ or } A055819-\{1\}\\
W_{2}=&\left\{2, 8, 48, 320, 2176, 14848, \ldots \right\}\\
W_{3}=&\left\{2, 12, 126, 1566, 20088, 259038, \ldots \right\}\\
W_{4}=&\left\{2, 16, 256, 5120, 106496, \ldots \right\}\\
W_{5}=&\left\{2, 20, 450, 13250, 407500, \ldots \right\}
\end{align*}
Note that the $1$-binomial transform $W_{1}$ coincides with the binomial transform $B_{1}$.
Note that
$$w_{k,n} = \sum_{i=0}^{n}\binom{n}{i}k^{n} M_{k,i} = k^{n} \sum_{i=0}^{n}\binom{n}{i} M_{k,i}= k^{n}b_{k,n},$$
and so, by Lemma \ref{binomial_T},
$$w_{k,n+1} = k^{n+1}\sum_{i=0}^{n}\binom{n}{i}\left(M_{k,i} + M_{k,i+1}\right).$$
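For example, for $k=2$ the first relation gives $w_{2,2}=2^{2}\,b_{2,2}=4\cdot 12=48$, in agreement with the sequence $W_{2}$ listed above.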
\begin{theorem}
The $k$-binomial transform of the modified $k$-Fibonacci-like sequence $W_{k}=\left(w_{k,n}\right)_{n\geq0}$ satisfies the recurrence relation
$$w_{k,n+1} =k(k+2)w_{k,n} - k^{3} w_{k,n-1} ~\text{for}~n\ge1$$
with the initial conditions $w_{k,0} = 2$, $w_{k,1} = 4k$.
\end{theorem}
\begin{proof}
By Theorem \ref{binomial_Ta}, we can easily obtain the following:
\begin{align*}
w_{k,n+1}&=k^{n+1} b_{k,n+1}\\
&=k^{n+1} \left[ (k+2)b_{k,n}-k b_{k,n-1}\right]\\
&=k^{n+1}(k+2)b_{k,n} - k^{n+2} b_{k,n-1}\\
&=k(k+2)w_{k,n} - k^{3} w_{k,n-1}
\end{align*}
\end{proof}
Similarly, Binet's formula for the $k$-binomial transform of the modified $k$-Fibonacci-like sequence is the following:
\begin{theorem}
Binet's formula for the $k$-binomial transform of the modified $k$-Fibonacci-like sequence is given by
$$w_{k,n}= 4k\frac{s_{1}^{n}-s_{2}^{n}}{s_{1}-s_{2}}- 2k^{3}\frac{s_{1}^{n-1}-s_{2}^{n-1}}{s_{1}-s_{2}},$$
where $s_{1}$ and $s_{2}$ are the roots of the characteristic equation $x^{2}-k(k+2)x+k^{3}=0$, and $s_{1}>s_{2}$.
\end{theorem}
\begin{proof}
The proof is the same as that of the binomial transform in Theorem \ref{binet_T}, now using the initial conditions $w_{k,0}=2$ and $w_{k,1}=4k$.
\end{proof}
Similarly, the generating function for the $k$-binomial transform of the modified $k$-Fibonacci-like sequence is
$$w_{k}(x) = \frac{2(1-k^{2}x)}{1-k(k+2)x+k^{3}x^{2}}.$$
\section{The rising \texorpdfstring{$k$}{Lg}-binomial transform of the modified \texorpdfstring{$k$}{Lg}-Fibonacci-like sequence }
The rising $k$-binomial transform of the modified $k$-Fibonacci-like sequence $\left(M_{k,n}\right)_{n\geq0}$ is denoted by $R_{k}=\left(r_{k,n}\right)_{n\geq0}$ where
\begin{displaymath}
r_{k,n} = \begin{cases}
\sum_{i=0}^{n}\binom{n}{i} k^{i}M_{k,i}, & \text{ for } k\ne 0 \text{ or } n\ne 0; \\
0, & \text{ if } k=0 \text{ and } n=0.
\end{cases}
\end{displaymath}
The first rising $k$-binomial transforms are as follows:
\begin{align*}
R_{1}=&\left\{2, 4, 10, 26, 68, 178, \ldots \right\} : A052995-\{0\} \text{ or } A055819-\{1\}\\
R_{2}=&\left\{2, 6, 34, 198, 1154, 6726, \ldots \right\}\\
R_{3}=&\left\{2, 8, 86, 938, 10232, \ldots \right\}\\
R_{4}=&\left\{2, 10, 178, 3194, 57314, \ldots \right\}\\
R_{5}=&\left\{2, 12, 322, 8682, 234092, \ldots \right\}
\end{align*}
\begin{lemma}\label{rbinomial_T}
For any integer $n\ge0$ and $k\ne0$,
$$r_{k,n} = \sum_{i=0}^{n}\binom{n}{i}k^{i} M_{k,i} = M_{k,2n}.$$
\end{lemma}
\begin{proof}
This identity coincides with Theorem 4.10 in \cite{YK}.
\end{proof}
\begin{theorem}\label{rbinomial_Ta}
The rising $k$-binomial transform of the modified $k$-Fibonacci-like sequence $R_{k}=\left(r_{k,n}\right)_{n\geq0}$ satisfies the recurrence relation
$$r_{k,n+1} = (k^{2}+2)r_{k,n} - r_{k,n-1} ~\text{for}~n\ge1$$
with the initial conditions $r_{k,0} = 2$, $r_{k,1} = 2k+2$.
\end{theorem}
\begin{proof}
From the definition of the modified $k$-Fibonacci-like sequence, we obtain
\begin{align*}
M_{k,2n+2}=&kM_{k,2n+1}+M_{k,2n}\\
=&k\left(kM_{k,2n}+M_{k,2n-1}\right)+M_{k,2n}\\
=&(k^{2}+1)M_{k,2n}+kM_{k,2n-1}\\
=&(k^{2}+1)M_{k,2n}+M_{k,2n}-M_{k,2n-2}\\
=&(k^{2}+2)M_{k,2n}-M_{k,2n-2}.
\end{align*}
By Lemma \ref{rbinomial_T}, since $r_{k,n}=M_{k,2n}$, then we have
$$r_{k,n+1}=(k^{2}+2)r_{k,n}-r_{k,n-1}.$$
\end{proof}
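For instance, for $k=2$ the recurrence gives $r_{2,2}=(k^{2}+2)\,r_{2,1}-r_{2,0}=6\cdot6-2=34=M_{2,4}$, in agreement with the sequence $R_{2}$ listed above.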
Similarly, Binet's formula for the rising $k$-binomial transform of the modified $k$-Fibonacci-like sequence is the following:
\begin{theorem}
Binet's formula for the rising $k$-binomial transform of the modified $k$-Fibonacci-like sequence is given by
$$r_{k,n}= (2k+2)\frac{t_{1}^{n}-t_{2}^{n}}{t_{1}-t_{2}}- 2\frac{t_{1}^{n-1}-t_{2}^{n-1}}{t_{1}-t_{2}},$$
where $t_{1}$ and $t_{2}$ are the roots of the characteristic equation $x^{2}-(k^{2}+2)x+1=0$, and $t_{1}>t_{2}$.
\end{theorem}
\begin{proof}
The proof is the same as that of the binomial transform in Theorem \ref{binet_T}.
\end{proof}
Similarly, the generating function for the rising $k$-binomial transform of the modified $k$-Fibonacci-like sequence is
$$
r_{k}(x) = \frac{2-(2k^2 - 2k +2)x}{1-(k^{2}+2)x+x^{2}}.
$$
\section{The falling \texorpdfstring{$k$}{Lg}-binomial transform of the modified \texorpdfstring{$k$}{Lg}-Fibonacci-like sequence }
The falling $k$-binomial transform of the modified $k$-Fibonacci-like sequence $\left(M_{k,n}\right)_{n\geq0}$ is denoted by $F_{k}=\left(f_{k,n}\right)_{n\geq0}$ where
\begin{displaymath}
f_{k,n} =\begin{cases}
\sum_{i=0}^{n}\binom{n}{i} k^{n-i}M_{k,i}, & \text{ for } k\ne 0 \text{ or } n\ne 0; \\
0, & \text{ if } k=0 \text{ and } n=0.
\end{cases}
\end{displaymath}
The first falling $k$-binomial transforms are as follows:
\begin{align*}
F_{1}=&\left\{2, 4, 10, 26, 68, 178, \ldots \right\} : A052995-\{0\} \text{ or } A055819-\{1\}\\
F_{2}=&\left\{2, 6, 22, 90, 386, 1686, \ldots \right\}\\
F_{3}=&\left\{2, 8, 38, 206, 1208, 7370, \ldots \right\}\\
F_{4}=&\left\{2, 10, 58, 386, 2834, 22042 \ldots \right\}\\
F_{5}=&\left\{2, 12, 82, 642, 5612, 52722 \ldots \right\}
\end{align*}
\begin{lemma}\label{fbinomial_T}
The falling $k$-binomial transform of the modified $k$-Fibonacci-like sequence satisfies the relation
$$f_{k,n+1}-kf_{k,n} = \sum_{i=0}^{n}\binom{n}{i} k^{n-i}M_{k,i+1}.$$
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{binomial_T}; we obtain
\begin{align*}
f_{k,n+1} - kf_{k,n}&= \sum_{i=0}^{n+1} \binom{n+1}{i}k^{n+1-i} M_{k,i}-\sum_{i=0}^{n}\binom{n}{i} k^{n+1-i}M_{k,i}\\
&=\sum_{i=1}^{n}\left[\binom{n+1}{i}-\binom{n}{i} \right]k^{n+1-i}M_{k,i} + M_{k,n+1} \\
&=\sum_{i=1}^{n}\binom{n}{i-1}k^{n+1-i}M_{k,i}+ M_{k,n+1}\\
&=\sum_{i=0}^{n-1}\binom{n}{i}k^{n-i}M_{k,i+1} + \binom{n}{n}M_{k,n+1}=\sum_{i=0}^{n}\binom{n}{i}k^{n-i}M_{k,i+1}.
\end{align*}
\end{proof}
Note that $f_{k,n+1} = \sum_{i=0}^{n}\binom{n}{i}\left( k^{n+1-i}M_{k,i}+ k^{n-i}M_{k,i+1}\right)$.
\begin{theorem}\label{fbinomial_Ta}
The falling $k$-binomial transform of the modified $k$-Fibonacci-like sequence $F_{k}=\left(f_{k,n}\right)_{n\geq0}$ satisfies the recurrence relation
$$f_{k,n+1} = 3kf_{k,n} - (2k^{2}-1)f_{k,n-1} ~\text{for}~n\ge1$$
with the initial conditions $f_{k,0} = 2$, $f_{k,1} = 2k+2$.
\end{theorem}
\begin{proof}
By Lemma \ref{fbinomial_T}, since $f_{k,n+1} = \sum_{i=0}^{n}\binom{n}{i}\left(k^{n+1-i}M_{k,i}+k^{n-i}M_{k,i+1}\right)$, then we have
\begin{align*}
f_{k,n+1}=&\sum_{i=0}^{n}\binom{n}{i}k^{n-i}\left(k M_{k,i}+M_{k,i+1}\right)\\
=&\sum_{i=1}^{n}\binom{n}{i}k^{n-i}\left(2kM_{k,i}+M_{k,i-1}\right)+k^{n}\left(k M_{k,0}+M_{k,1}\right)\\
=&2k\sum_{i=1}^{n}\binom{n}{i}k^{n-i}M_{k,i} +\sum_{i=1}^{n}\binom{n}{i}k^{n-i}M_{k,i-1} + k^{n}\left(kM_{k,0}+M_{k,1}\right)\\
=&2k\sum_{i=0}^{n}\binom{n}{i}k^{n-i}M_{k,i} +\sum_{i=1}^{n}\binom{n}{i}k^{n-i}M_{k,i-1}\\
& + k^{n}\left(kM_{k,0}+M_{k,1}-2kM_{k,0}\right)\\
=&2kf_{k,n} +\sum_{i=1}^{n}\binom{n}{i}k^{n-i}M_{k,i-1} + k^{n}\left(M_{k,1}-kM_{k,0}\right).
\end{align*}
On the other hand, noting that $\binom{n-1}{n}=0$, we can obtain the following:
\begin{align*}
kf_{k,n} =&2k^2f_{k,n-1}+\sum_{i=1}^{n-1}\binom{n-1}{i}k^{n-i}M_{k,i-1}+k^{n}\left(M_{k,1}-kM_{k,0}\right)\\
=&2k^2f_{k,n-1}-\left[f_{k,n-1}-\sum_{i=0}^{n-1}\binom{n-1}{i}k^{n-1-i}M_{k,i}\right]\\
&+\sum_{i=0}^{n-2}\binom{n-1}{i+1}k^{n-1-i}M_{k,i}+k^{n}\left(M_{k,1}-kM_{k,0}\right)\\
=&\left(2k^{2}-1\right)f_{k,n-1}+\sum_{i=0}^{n-1}\left[\binom{n-1}{i}+\binom{n-1}{i+1}\right]k^{n-1-i}M_{k,i}\\
&+k^{n}\left(M_{k,1}-kM_{k,0}\right)\\
=&\left(2k^{2}-1\right)f_{k,n-1}+\sum_{i=0}^{n-1}\binom{n}{i+1}k^{n-1-i}M_{k,i}+k^{n}\left(M_{k,1}-kM_{k,0}\right)\\
=&\left(2k^{2}-1\right)f_{k,n-1}+\sum_{i=1}^{n}\binom{n}{i}k^{n-i}M_{k,i-1}+k^{n}\left(M_{k,1}-kM_{k,0}\right).
\end{align*}
Subtracting the second identity from the first yields
$$f_{k,n+1}-2kf_{k,n}=kf_{k,n}-\left(2k^{2}-1\right)f_{k,n-1},$$
and so
$$f_{k,n+1} = 3kf_{k,n} - (2k^{2}-1) f_{k,n-1}.$$
\end{proof}
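For instance, for $k=2$ the recurrence gives $f_{2,2}=3k\,f_{2,1}-(2k^{2}-1)\,f_{2,0}=6\cdot6-7\cdot2=22$ and $f_{2,3}=6\cdot22-7\cdot6=90$, in agreement with the sequence $F_{2}$ listed above.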
Similarly, Binet's formula for the falling $k$-binomial transform of the modified $k$-Fibonacci-like sequence is the following:
\begin{theorem}
Binet's formula for the falling $k$-binomial transform of the modified $k$-Fibonacci-like sequence is given by
$$f_{k,n}= (2k+2)\frac{u_{1}^{n}-u_{2}^{n}}{u_{1}-u_{2}}- 2(2k^{2}-1)\frac{u_{1}^{n-1}-u_{2}^{n-1}}{u_{1}-u_{2}},$$
where $u_{1}$ and $u_{2}$ are the roots of the characteristic equation $x^{2}-3kx+(2k^{2}-1)=0$, and $u_{1}>u_{2}$.
\end{theorem}
\begin{proof}
The proof is the same as that of the binomial transform in Theorem \ref{binet_T}.
\end{proof}
Similarly, the generating function for the falling $k$-binomial transform of the modified $k$-Fibonacci-like sequence is
$$
f_{k}(x) = \frac{2+(2-4k)x}{1-3kx+(2k^{2}-1)x^{2}}.
$$
\section{Conclusion}
This paper applies the four transforms (the binomial, $k$-binomial, rising $k$-binomial, and falling $k$-binomial transforms) to the modified $k$-Fibonacci-like sequence. Although most of the results are rather similar to those for previously studied sequences, this study is still meaningful, as it introduces several new approaches and methods for deriving the formulas. Furthermore, it establishes Binet's formulas and generating functions for the four transformed sequences.
\end{document}
\begin{document}
\title{A note on counting flows in signed graphs}
\begin{abstract}
Tutte initiated the study of nowhere-zero flows and proved the following fundamental theorem: For every graph $G$ there is a polynomial $f$ so that
for every abelian group $\Gamma$ of order $n$, the number of nowhere-zero $\Gamma$-flows in $G$ is $f(n)$. For signed graphs (which have bidirected
orientations), the situation is more subtle. For a finite group~$\Gamma$, let $\epsilon_2(\Gamma)$ be the largest integer $d$ so that $\Gamma$ has a
subgroup isomorphic to~$\mathbb{Z}_2^d$. We prove that for every signed graph $G$ and $d \ge 0$ there is a polynomial $f_d$ so that $f_d(n)$ is the
number of nowhere-zero $\Gamma$-flows in $G$ for every abelian group~$\Gamma$ with $\epsilon_2(\Gamma) = d$ and $|\Gamma| = 2^d n$. Beck and
Zaslavsky~\cite{BZ06} had previously established the special case of this result when $d=0$ (i.e., when $\Gamma$ has odd order).
\end{abstract}
\section{Introduction}
Throughout the paper we permit graphs to have both multiple edges and loops. Let $G$ be a graph equipped with an orientation of its edges and let $\Gamma$ be an abelian
group written additively. We say that a function $\phi : E(G) \rightarrow \Gamma$ is a $\Gamma$-\emph{flow} if it satisfies the following equation (Kirchhoff's law) for
every vertex $v \in V(G)$.
\[ \sum_{e \in \delta^+(v) } \phi(e) - \sum_{e \in \delta^-(v)} \phi(e) = 0, \]
where $\delta^+(v)$ ($\delta^-(v)$) denotes the set of edges directed away from (toward) the vertex $v$.
We say that $\phi$ is \emph{nowhere-zero} if $0 \not\in \phi(E(G))$.
If $\phi$ is a $\Gamma$-flow and we switch the direction of an edge $e$ of $G$, we may obtain a
new flow by replacing~$\phi(e)$ by its additive inverse. Note that this does not affect the property of being nowhere-zero. So, in particular, whenever some
orientation of $G$ has a nowhere-zero $\Gamma$-flow, the same will be true for every orientation. More generally, the number of nowhere-zero $\Gamma$-flows
in two different orientations of $G$ will always be equal, and we denote this important quantity by $\Phi(G,\Gamma)$.
Tutte~\cite{Tutte54} introduced the concept of a nowhere-zero $\Gamma$-flow and proved the following key theorem about counting them.
\begin{theorem}[Tutte~\cite{Tutte54}]
\label{tutte}
Let $G$ be a graph.
\begin{enumerate}
\item If $\Gamma$ and $\Gamma'$ are abelian groups with $|\Gamma| = |\Gamma'|$, then $\Phi(G,\Gamma) = \Phi(G,\Gamma')$.
\item There exists a polynomial $f$ so that $\Phi(G,\Gamma) = f(n)$ for every abelian group~$\Gamma$ with $|\Gamma| = n$.
\end{enumerate}
\end{theorem}
Our interest in this paper is in counting nowhere-zero $\Gamma$-flows in signed graphs, so we proceed with an introduction to this setting. A \emph{signature} of a graph $G$ is a function $\sigma :
E(G) \rightarrow \{-1,1\}$. We say that a subgraph $H$ is \emph{positive} if $\prod_{e \in E(H)} \sigma(e) = 1$ and \emph{negative} if this product is $-1$, in particular we call
an edge $e$ \emph{positive} (\emph{negative}) if the graph $e$ induces is positive (negative). We say that two signatures $\sigma$ and $\sigma'$ are \emph{equivalent} if
the symmetric difference of the negative edges of $\sigma$ and the negative edges of $\sigma'$ is an edge-cut of $G$. Let us note that two signatures are equivalent if
and only if they give rise to the same set of negative cycles; this instructive exercise was observed by Zaslavsky~\cite{Zaslavsky}. Observe that if $\sigma$ is a signature and $C$ is an
edge-cut of $G$, then we may form a new signature $\sigma'$ equivalent to $\sigma$ by the following rule:
\[ \sigma'(e) = \left\{ \begin{array}{cl}
\sigma(e) & \mbox{if $e \not\in C$} \\
- \sigma(e) & \mbox{if $e \in C$.}
\end{array} \right. \]
So, in particular, for any signature $\sigma$ and a non-loop edge $e$, there is a signature $\sigma'$ equivalent to $\sigma$ with $\sigma'(e) = 1$. We define a
\emph{signed graph} to consist of a graph~$G$ together with a signature $\sigma_G$. As suggested by our terminology, we will only be interested in properties of signed
graphs which are invariant under changing to an equivalent signature.
Following Bouchet~\cite{Bouchet} we now introduce a notion of a half-edge so as to orient a signed graph. For
every graph $G$ we let $H(G)$ be a set of \emph{half edges} obtained from the set of edges $E(G)$ as follows. Each edge $e=uv$ contains two distinct half edges $h$ and $h'$ incident with $u$ and $v$, respectively. Note that if $u=v$, $e$ is a loop containing two half-edges both incident with $u$.
For a half-edge $h \in H(G)$, we let $e_h$~denote the edge of~$G$ that contains~$h$.
To orient
a signed graph $G$ we will equip each half edge with an arrow and direct it either toward or away from its incident vertex. Formally, we define an \emph{orientation} of a
signed graph $G$ to be a function $\tau : H(G) \rightarrow \{-1,1\}$ with the property that for every edge $e$ containing the half edges $h,h'$ we have
\[ \tau(h) \tau(h') = - \sigma_G(e). \]
We think of a half edge $h$ with $\tau(h) = 1$ ($\tau(h) = -1$) to be directed toward (away from) its endpoint. Note that in the case when $\sigma_G$ is identically 1, both arrows on every half edge are oriented consistently, and this aligns with the usual notion of orientation of an (ordinary) graph.
\begin{figure}
\caption{Orientations of edges in a signed graph}
\end{figure}
We define a $\Gamma$-\emph{flow} in such an orientation of a signed graph $G$ to be a function $\phi : E(G) \rightarrow \Gamma$ which obeys the following rule at every vertex $v$
\[ \sum_{ \{h \in H(G) \mid h \sim v \} } \tau(h) \phi(e_h) = 0. \]
As before, we call $\phi$ \emph{nowhere-zero} if $0 \not\in \phi(E(G))$. Note that in the case when $\sigma_{G}$ is identically 1, this notion agrees with our earlier notion of a (nowhere-zero) flow in an orientation
of a graph. Also note that, as before, we may obtain a new flow by reversing the orientation of an edge $e$ (i.e., by changing the sign of $\tau(h)$ for both half edges
contained in $e$) and then replacing $\phi(e)$ by its additive inverse. This new flow is nowhere-zero if and only if the original flow had this
property. In light of this, we may now define $\Phi(G,\Gamma)$ to be the number of nowhere-zero $\Gamma$-flows in some (and thus every) orientation
of the signed graph~$G$.
As we remarked, we are only interested in properties of signed graphs which are invariant under changing to an equivalent signature, and this is indeed the case for $\Phi(G,\Gamma)$. To see this, suppose that $\tau$ is an orientation of the signed graph $G$ and that $\phi$ is a nowhere-zero $\Gamma$-flow for this orientation. Assume that the signature $\sigma'_G$ is obtained from $\sigma_G$ by flipping the sign of every edge in the edge-cut $\delta(X)$ (here $X \subseteq V(G)$ and $\delta(X)$ is the set of edges with exactly one end in $X$). Modify the orientation $\tau$ to obtain a new orientation $\tau'$ by switching the sign of $h$ for every half edge incident with a vertex of $X$. It is straightforward to verify that $\tau'$ is now an orientation of the signed graph given by $G$ and $\sigma_G'$, and $\phi$ is still a $\Gamma$-flow for this new oriented signed graph.
Beck and Zaslavsky~\cite{BZ06} considered the problem of counting nowhere-zero flows in signed graphs and proved the following analogue of Tutte's Theorem~\ref{tutte} for
groups of odd order.
\begin{theorem}[Beck and Zaslavsky~\cite{BZ06}]
Let $G$ be a signed graph.
\begin{enumerate}
\item If $\Gamma,\Gamma'$ are abelian groups and $|\Gamma| = |\Gamma'|$ is odd, then $\Phi(G,\Gamma) = \Phi(G,\Gamma')$.
\item There exists a polynomial $f$ so that for every odd integer $n$, every abelian group $\Gamma$ with $|\Gamma|=n$ satisfies
$f(n) = \Phi(G,\Gamma)$.
\end{enumerate}
\end{theorem}
The purpose of this note is to extend the above theorem to allow for groups of even order by incorporating another parameter. For any finite group $\Gamma$ we define
$\epsilon_2(\Gamma)$ to be the largest integer $d$ so that $\Gamma$ contains a subgroup isomorphic to $\mathbb{Z}_2^d$ (here $\mathbb{Z}_2 = \mathbb{Z}/2\mathbb{Z}$).
\begin{theorem}
\label{maingroup}
Let $G$ be a signed graph and let $d \ge 0$.
\begin{enumerate}
\item If $\Gamma$ and $\Gamma'$ are abelian groups with $|\Gamma| = |\Gamma'|$ and $\epsilon_2(\Gamma) = \epsilon_2(\Gamma')$, then $\Phi(G,\Gamma) = \Phi(G,\Gamma')$.
\item There exists a polynomial $f_d$ so that $\Phi(G,\Gamma) = f_d(n)$ for every abelian group $\Gamma$ with $\epsilon_2(\Gamma) = d$
and $|\Gamma| = 2^dn$.
\end{enumerate}
\end{theorem}
The proof of the above theorem is a straightforward adaptation of Tutte's original method, so it may seem surprising it was not proved earlier. The cause of this may be
some confusion over whether or not it was already done. The paper by Beck and Zaslavsky~\cite{BZ06} includes a footnote with the following comment: ``Counting of flows in groups of
even order has been completely resolved by Cameron et al.''. This refers to an interesting paper of Cameron, Jackson, and Rudd~\cite{CJR} which concerns problems such as counting
the number of orbits of nowhere-zero flows under a group action. However, the methods developed in this paper only apply to counting nowhere-zero flows in (ordinary) graphs
for the reason that the incidence matrix of an oriented graph is totally unimodular. Since the corresponding incidence matrices of oriented signed graphs are generally
not totally unimodular (and not equivalent to such matrices under elementary row and column operations), our result does not follow from Cameron et al.
Before giving the proof of our theorem, let us pause to make one further comment about nowhere-zero flows in signed graphs which consist of a single loop edge~$e$. For a
loop edge $e$ with signature $1$ we may obtain a nowhere-zero flow by assigning any nonzero value $x$ to the edge $e$. So, two groups $\Gamma$ and $\Gamma'$ will have
the same number of nowhere-zero flows for this graph if and only if $|\Gamma| = |\Gamma'|$. If, on the other hand, our graph consists of a single loop edge $e$ which is
negative, then the number of nowhere-zero $\Gamma$-flows in this graph will be precisely the number of nonzero group elements $y$ for which $2y = 0$ (i.e., the number of
elements of order~2). All elements of order~2 form (together with the zero element) a subgroup isomorphic to~$\mathbb{Z}_2^{\epsilon_2(\Gamma)}$, thus
this number is precisely $2^{{\epsilon}_2(\Gamma)} - 1$. So, in order for two groups $\Gamma$ and $\Gamma'$ to have the same number of
nowhere-zero flows on this graph, they
must satisfy $\epsilon_2(\Gamma) = \epsilon_2(\Gamma')$. By our main theorem, two groups $\Gamma$ and $\Gamma'$ will satisfy $\Phi(G,\Gamma) = \Phi(G,\Gamma')$ for every signed graph $G$ if and only if this holds for every one edge graph. This statement is in precise analogy with the situation for flows in ordinary graphs.
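For example, for $\Gamma = \mathbb{Z}_2 \times \mathbb{Z}_4$ we have $\epsilon_2(\Gamma) = 2$, so a single negative loop has exactly $2^2 - 1 = 3$ nowhere-zero $\Gamma$-flows, corresponding to the three elements of order two, namely $(1,0)$, $(0,2)$, and $(1,2)$, while a single positive loop has $|\Gamma| - 1 = 7$.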
\begin{figure}
\caption{Two graphs that determine $\Phi(G,\Gamma)$ for every other graph~$G$.}
\end{figure}
We close the introduction by mentioning related results about the number of integer flows.
Tutte~\cite{Tutte49} defined a nowhere-zero $n$-flow to be a $\mathbb{Z}$-valued flow that only uses values~$k$
with $0 < |k| < n$. Surprisingly, a graph has a nowhere-zero $n$-flow if and only if it has
a nowhere-zero $\mathbb{Z}_n$-flow. Let us use $\Phi(G,n)$ to denote the number of nowhere-zero $n$-flows on~$G$.
While $\Phi(G,n)$ and $\Phi(G,\mathbb{Z}_n)$ are either both zero or both nonzero, the actual values differ.
An analogous statement to the second part of Theorem~\ref{tutte} is again true, by a result of
Kochol~\cite{Kochol}; that is, $\Phi(G,n)$ is a polynomial in~$n$. His result has already been extended to bidirected graphs.
Beck and Zaslavsky~\cite{BZ06} prove that for a signed graph~$G$, $\Phi (G,n)$~is a quasipolynomial of period 1 or 2; that is,
there are polynomials~$p_0$ and~$p_1$ such that $\Phi(G,n)$ is equal to~$p_0(n)$ for even~$n$ and to~$p_1(n)$ for odd~$n$.
Both Kochol's result and that of Beck and Zaslavsky are proved by an illustrative application of Ehrhart's theorem~\cite{Ehrhart, Sam}.
\section{The proof}
\label{sec:group}
The proof of our main theorem requires the following lemma about counting certain solutions to an equation in an abelian group.
\begin{lemma}
\label{abeliancount}
Let $\Gamma$ be an abelian group with $\epsilon_2(\Gamma) = d$ and $|\Gamma| = 2^d n$. Then the number of solutions to $2x_1 + \dots + 2x_t = 0$ with
$x_1, \ldots, x_t \in \Gamma \setminus \{ 0 \}$ is given by the formula
\[
(2^d-1)^t + \sum_{s=1}^{t} (2^d)^s (2^d-1)^{t-s} {t \choose s} \sum_{i=1}^{s-1} (-1)^{i-1} (n-1)^{s-i} \,.
\]
\end{lemma}
\begin{proof}
We claim that for every abelian group of order $m$, the number of solutions to $x_1 + \dots + x_t = 0$ with $x_1, \ldots, x_t \neq 0$ is given by the formula
\[
\sum_{i=1}^{t-1} (-1)^{i-1} (m-1)^{t-i} \,.
\]
We prove this by induction on $t$. The base case $t=1$ holds trivially. For the inductive step, we may assume $t \ge 2$. The total number of solutions to the given
equation for which $x_1, \ldots, x_{t-1}$ are nonzero, but $x_{t}$ is permitted to have any value is exactly $(m-1)^{t-1}$ since we may choose the nonzero terms $x_1,
\ldots, x_{t-1}$ arbitrarily and then set $x_{t} = - \sum_{i=1}^{t-1} x_i$ to obtain a solution. By induction, there are exactly $\sum_{i=1}^{t-2} (-1)^{i-1}
(m-1)^{t-1-i}$ of these solutions for which $x_{t} = 0$. We conclude that the number of solutions with all variables nonzero is
\[ (m-1)^{t-1} - \sum_{i=1}^{t-2} (-1)^{i-1} (m-1)^{t-1-i} = \sum_{i=1}^{t-1} (-1)^{i-1} (m-1)^{t-i} \]
as claimed.
Now, to prove the lemma, we consider the group homomorphism $\psi : \Gamma \rightarrow \Gamma$ given by the rule $\psi (x) = x+x$. Note that the kernel of $\psi$,
denoted $ker(\psi)$, is isomorphic to $\mathbb{Z}_2^d$. Now $x_1, \ldots, x_t$ satisfy $2x_1 + \dots + 2x_t = 0$ if and only if
$\psi(x_1), \ldots, \psi(x_t)$ satisfy $\psi(x_1) + \dots + \psi(x_t) = 0$.
So, to count the number of solutions to $2x_1 + \dots + 2x_t = 0$ in $\Gamma$ with all variables nonzero, we may count all possible solutions
to $y_1 + \dots + y_t = 0$ within the group $\psi(\Gamma)$ and then, for each such solution, count the number of nonzero sequences $x_1, \ldots, x_t$ in $\Gamma$ with $
\psi(x_i) = y_i$. For every $y_i \in \psi(\Gamma)$, the pre-image $\psi^{-1}(y_i)$ is a coset of $ker(\psi)$. So the number of nonzero elements $x_i$ with $\psi(x_i) =
y_i$ will equal $2^d$ if $y_i \neq 0$ and $2^d - 1$ if $y_i = 0$. Now we will combine this with the claim proved above. For every $1 \le s \le t$, the number of
solutions to $y_1 + \dots + y_t=0$ in the group $\psi(\Gamma)$ with exactly $s$ nonzero terms is given by
\[
{t \choose s} \sum_{i=1}^{s-1} (-1)^{i-1} (n-1)^{s-i} \,.
\]
For $s=0$, there is exactly one such solution, namely $y_1 = \dots = y_t = 0$. Each such solution with exactly $s$ nonzero terms will be the image of exactly $(2^d)^s (2^d-1)^{t-s}$ nonzero sequences $x_1, \ldots, x_t \in \Gamma$. Summing over all $0 \le s \le t$ gives the desired formula.
\end{proof}
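As an independent sanity check (not part of the original argument), the count in Lemma~\ref{abeliancount} can be verified by brute force for small groups; the following Python sketch, with illustrative function names of our own, compares the formula (including the $(2^d-1)^t$ term coming from the all-zero image) against direct enumeration for $\Gamma = \mathbb{Z}_4$, so that $d=1$ and $n=2$.
\begin{verbatim}
from itertools import product
from math import comb

def count_solutions(group_orders, t):
    # Brute-force count of tuples (x_1, ..., x_t) of nonzero elements of
    # Z_{m_1} x ... x Z_{m_r} satisfying 2 x_1 + ... + 2 x_t = 0.
    nonzero = [g for g in product(*[range(m) for m in group_orders]) if any(g)]
    return sum(
        1 for xs in product(nonzero, repeat=t)
        if all(sum(2 * x[j] for x in xs) % m == 0
               for j, m in enumerate(group_orders))
    )

def lemma_formula(d, n, t):
    # (2^d - 1)^t accounts for the all-zero image; the rest is as in the lemma.
    total = (2 ** d - 1) ** t
    for s in range(1, t + 1):
        inner = sum((-1) ** (i - 1) * (n - 1) ** (s - i) for i in range(1, s))
        total += (2 ** d) ** s * (2 ** d - 1) ** (t - s) * comb(t, s) * inner
    return total

for t in range(1, 5):
    print(t, count_solutions((4,), t), lemma_formula(1, 2, t))
\end{verbatim}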
We also require the usual contraction-deletion formula for counting nowhere-zero flows.
\begin{observation}
\label{contdelobs}
Let $G$ be an oriented signed graph and let $e \in E(G)$ satisfy $\sigma_G(e) = 1$.
\begin{enumerate}
\item If $e$ is a loop edge, then $\Phi(G,\Gamma) = (|\Gamma|-1) \Phi({G \setminus e},\Gamma)$.
\item If $e$ is not a loop edge, then $\Phi(G,\Gamma) = \Phi({G/e},\Gamma) - \Phi({G \setminus e},\Gamma)$.
\end{enumerate}
\end{observation}
\begin{proof} The first part follows from the observation that every nowhere-zero flow in $G$ is obtained from a nowhere-zero flow in $G \setminus e$ by choosing an arbitrary nonzero value for $e$. The second part follows from the usual contraction-deletion formula for flows. Suppose $\phi$ is a nowhere-zero flow in $G / e$, and return to the original graph $G$ by uncontracting $e$. It follows from elementary considerations that there is a unique value $\phi(e)$ we can assign to $e$ so that $\phi$ is a flow. It follows that $\Phi(G/e,\Gamma)$ is precisely the number of $\Gamma$-flows in $G$ for which all edges except possibly $e$ are nonzero. This latter count is exactly $\Phi(G,\Gamma) + \Phi(G \setminus e,\Gamma)$ and this completes the proof.
\end{proof}
Equipped with these lemmas, we are ready to prove our main theorem about counting group-valued flows.
\begin{proof}[Proof of Theorem~\ref{maingroup}]
For the first part, we proceed by induction on $|E(G)|$. Our base cases will consist of one vertex graphs $G$ for which every edge has signature $-1$. In this case we
may orient $G$ so that every half-edge is directed toward its endpoint. If the edges are $e_1, \ldots, e_t$, then to find a nowhere-zero flow we need to assign each
edge~$e_i$ a nonzero value~$x_i$ so that $2x_1 + \dots + 2x_t = 0$.
By Lemma~\ref{abeliancount}, the number of ways to do this is the same for $\Gamma$ and $\Gamma'$.
For the inductive step, we may assume $G$ is connected, as otherwise the result follows by applying induction to each component. If $G$ has a loop edge $e$ with
$\sigma_G(e) = 1$, then the result follows from the previous lemma and induction on $G \setminus e$. Otherwise $G$ must have a non-loop edge $e$. By possibly switching
to an equivalent signature, we may assume that $\sigma_G(e) = 1$. Now our result follows from the previous lemma and induction on $G \setminus e$ and $G / e$.
The second part of the theorem follows by a very similar argument. In the base case when $G$ is a one vertex graph in which every edge has signature $-1$, the desired
polynomial is given by Lemma~\ref{abeliancount}. For the inductive step, we may assume $G$ is connected, as otherwise the result follows by applying induction to each
component and taking the product of these polynomials. If we are not in the base case, then $G$ must either have a loop edge with signature $1$ or a non-loop edge $e$
which we may assume has signature $1$. In either case, Observation~\ref{contdelobs} and induction yield the desired result.
\end{proof}
\end{document}
\begin{document}
\title{Correlations Between Quantumness and Learning Performance in Reservoir Computing with a Single Oscillator}
\author{Arsalan~Motamedi}
\email{[email protected]}
\affiliation{Institute for Quantum Computing, Department of Physics \& Astronomy University of Waterloo, Waterloo, ON, N2L 3G1, Canada}
\author{Hadi~Zadeh-Haghighi}
\email{[email protected]}
\affiliation{Department of Physics and Astronomy, Institute for Quantum Science and Technology, Quantum Alberta, and Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 1N4, Canada}
\author{Christoph Simon}
\email{[email protected]}
\affiliation{Department of Physics and Astronomy, Institute for Quantum Science and Technology, Quantum Alberta, and Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 1N4, Canada}
\date{\today}
\maketitle
\section{Introduction}
The theory of quantum information processing has been thriving over the past few decades, offering various advantages, including efficient algorithms for breaking Rivest–Shamir–Adleman (RSA) encryption, exponential query complexity speed-ups, improvement of sensors and advances in metrology, and the introduction of secure communication protocols \cite{shor1999polynomial, MacQuarrie_2020,Harrow_2009, nielsen2002quantum, bennett2020quantum, degen2017quantum, simon2017towards, rivest1983cryptographic}. Nevertheless, the challenge of error correction and fault-tolerant quantum computing is still the biggest obstacle to the realization of a quantum computer. Despite threshold theorems giving the hope of fault-tolerant computation on quantum hardware \cite{aharonov1997fault, knill1998resilient, kitaev2003fault, shor1996fault}, a successful realization of such methods has only recently been accomplished on intermediate-size quantum computers \cite{acharya2022suppressing}, and the implementation of a large-scale quantum computer is yet to be achieved. Moreover, today's quantum hardware contains only a few tens of qubits. Hence, we are in the noisy intermediate-scale quantum (NISQ) era, and it is of interest to know what tasks could be performed by such limited noisy devices that are hard to do with classical computers \cite{temme2017error, bharti2022noisy, kandala2019error, preskill2018quantum}.
In the past few years, and on the classical computing side, neuromorphic (brain-inspired) computing has shown promising results \cite{farquhar2006field, hopfield1982neural, schmidhuber2015deep, goodfellow2020generative}, most notably the celebrated artificial neural networks used in machine learning. Neuromorphic computing uses a network of neurons to access a vast class of parametrized non-linear functions. Despite being very successful in accuracy, these models are hard to train due to the need to optimize many parameters. Another obstacle in the training of such models is the vanishing gradient problem \cite{pascanu2013difficulty, basodi2020gradient}.
A subfield of brain-inspired computing, derived from recurrent neural networks, is reservoir computing, where the learning is to be performed only at the readout. Notably, this simplification (optimizing over a smaller set of parameters) allows for circumventing the problem of barren plateaus encountered in the training of recurrent neural networks. Despite such simplifications, reservoir computing still shows remarkable performance \cite{maass2002real,jaeger2004harnessing,tanaka2019recent, rohm2018multiplexed, nature1, nakajima2018reservoir}. Reservoir computing methods are often applied to temporal categorization, regression, and time-series prediction \cite{schrauwen2007overview, mammedov2022weather}. Moreover, there have been successful efforts on the physical implementation of (classical) reservoir computing \cite{tanaka2019recent, kan2022physical, nakajima2018reservoir}.
More recently, the usefulness of quantum computing in the field of machine learning has been studied \cite{biamonte2017quantum, QML, schuld2015introduction}. In addition to that, there are novel attempts to introduce an appropriate quantum counterpart for classical neuromorphic (in particular reservoir) computing. There have been different reservoir models considered, which could mostly be categorized as spin-based or oscillator-based \cite{fujii2021quantum,PhysRevResearch.3.013077, luchnikov2019simulation, martinez2020information, nokkala2021gaussian} (corresponding to finite and infinite dimensional Hilbert spaces).
On the quantum reservoir computing front, there have been efforts such as \cite{PhysRevResearch.3.013077}, where a single Kerr oscillator is exploited for fundamental signal processing tasks. The approach used in \cite{nokkala2021gaussian} for quantum reservoir computing introduces non-linearity through the encoding of input signals in Gaussian states. Their approach has been proven to be universal, meaning that it can accurately approximate fading memory functions with arbitrary precision. \cite{pfeffer2022quantum} predicts time series using a spin-based model. \cite{ghosh2021quantum} exploits a network of interacting quantum reservoirs for tasks like quantum state preparation and tomography. \cite{vintskevich2022computing} proposes heuristic approaches for optimized coupling of two quantum reservoirs. An analysis of the effect of quantumness is performed in \cite{PhysRevResearch.3.013077}, where the authors consider dimensionality as a quantum resource. The effects of quantumness have been studied more concretely in \cite{gotting2023exploring}, where they consider an Ising model as their reservoir and show that the dimension of the phase space used in the computation is linked to the system's entanglement. Also, \cite{pfeffer2022quantum} demonstrates that quantum entanglement might enhance reservoir computing. Specifically, they show that a quantum model with a few strongly entangled qubits could perform as well as a classical reservoir with thousands of perceptrons, and moreover, performance declines when the reservoir is decomposed into separable subsets of qubits.
In this work, we explore how well a single quantum non-linear oscillator performs time series predictions. In particular, we are focused on the prediction of the Mackey-Glass (MG) time series \cite{mackey1977oscillation}, which is often used as a benchmark task in reservoir computing. We then investigate the role of quantumness in the quantum learning model. We use two quantumness measures, the Wigner negativity and the Lee-Jeong measure. The latter was originally introduced as a measure for macroscopicity, but here we demonstrate that it is a non-classicality measure as well. Using our approaches, we observe that quantumness correlates with learning performance, and that it does so more strongly than dimensionality.
The paper is organized as follows. In \cref{sec:rc} we introduce the reservoir computing method used in this work. \cref{sec:pts} shows the performance of the method. \cref{sec:q} analyzes the effect of quantumness measures on performance. Finally, \cref{sec:discussion} provides a discussion of the findings.
\begin{figure}
\caption{A schematic representation of the computation model, either classical or quantum. In the learning process, we find the proper $A$ that is to predict the sample set $G$, based on the outputs of the reservoir. The dynamics of the reservoir is controlled via sample set $F$.}
\label{fig:Schem}
\end{figure}
\begin{figure*}
\caption{Performance of the trained quantum model.}
\label{fig:MG1}
\label{fig:MG2}
\label{fig:1}
\end{figure*}
\section{Reservoir Computing}\label{sec:rc}
In this work, we use the approach introduced by Govia et al. \cite{PhysRevResearch.3.013077} to feed the input signal to the reservoir by manipulating the Hamiltonian. We describe the state of our quantum and classical systems using $\hat\rho$ and $a$, respectively. Let us consider a time interval of length $\Delta t$ that has been discretized with $N$ equidistant points $t_1< \cdots< t_N$. We can expand this set by considering $M$ future values $t_{N+1}< \cdots< t_{N+M}$, which are also equidistantly distributed. It is worth noting that the interval $t_N - t_1$ is equal to $\Delta t$. Our objective is to estimate a set of future values of the function $f$, which is denoted by $G = \left(f(t_{i})\right)_{i=N+1}^{N+M}$, given the recent observations $F=\left(f(t_i)\right)_{i=1}^N$. To this end, we evolve our system so that $\hat\rho(t_j)$ (respectively, $a(t_j)$) depends on $f(t_1), \cdots, f(t_j)$. We then obtain observations $s(t_i) = \tr\left(\hat{O} \hat\rho(t_i)\right)$ for an observable $\hat O$ (respectively, $s(t_i) = \left(h \circ a\right)(t_i)$ for a function $h$). Finally, we perform a linear regression on $\left( s(t_i)\right)_{i=1}^N$ to predict $G$. In what follows we elaborate on the system's evolution in both classical and quantum cases.
For the classical reservoir, we consider the following evolution
\begin{align}\label{eq:classicEv}
\dot{a} = -iK(1+2\, |a|^2)a - \frac{\kappa}{2} a - i\alpha f(t)
\end{align}
with $K$, $\kappa$, and $\alpha$ being the reservoir's natural frequency, dissipation rate, and the amplifier of the input signal, respectively. We let $s(t) = \tanh\left(\Re{a(t)}\right)$. The quantum counterpart of the evolution described by \cref{eq:classicEv} is the following Markovian dynamics
\cite{PhysRevResearch.3.013077,gardiner2004quantum}
\begin{equation}
\begin{split}
\frac{d}{dt}\hat\rho(t) &= -i [\hat{H}(t), \hat\rho(t)] + \kappa\, \mathcal{D}_a(\hat\rho)\\
\text{where }\, \hat{H}(t) &= K\, \hat{N}^2 + \alpha \, f(t)\, \hat{X}, \\ \text{and }\, \mathcal{D}_a(\hat\rho) &= \hat{a} \hat\rho \hat{a}^\dagger - \{\hat{N}, \hat\rho\}/2. \label{eq:ev-2}
\end{split}
\end{equation}
Note that we use properly scaled parameters, such that the uncertainty principle becomes $\Delta x \, \Delta p \geq 1/2$. The parameters $(\alpha, \kappa, K)$ are the same as above (in \eqref{eq:classicEv}). The operators $\hat X$, $\hat a$, and $\hat N$ represent the position, annihilation, and number operators, respectively. We let $s(t) = \tr ( \hat\rho(t) \,\tanh \hat X)$. Utilizing the non-linear quantum evolution \eqref{eq:ev-2} as our quantum reservoir, we perform the learning of the MG time series.
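As an illustration of how such an evolution can be simulated numerically, the following sketch uses the QuTiP library (QuTiP~4 conventions); the truncation dimension, drive $f(t)$, and initial state are placeholder choices of ours rather than the settings used for the results reported below.
\begin{verbatim}
import numpy as np
import qutip as qt
from scipy.linalg import tanhm

d_t = 20                                   # Fock-space truncation
K, kappa, alpha = 0.05, 0.1, 1.2           # oscillator parameters
a = qt.destroy(d_t)
N_op = a.dag() * a
X = (a + a.dag()) / np.sqrt(2)             # position quadrature
tanh_X = qt.Qobj(tanhm(X.full()))          # readout observable tanh(X)

def f(t, args):
    return np.sin(0.3 * t)                 # placeholder input; MG samples would go here

H = [K * N_op ** 2, [alpha * X, f]]        # H(t) = K N^2 + alpha f(t) X
c_ops = [np.sqrt(kappa) * a]               # photon-loss dissipator of the master equation
rho0 = qt.coherent_dm(d_t, 1 + 1j)
tlist = np.linspace(0, 10, 200)

result = qt.mesolve(H, rho0, tlist, c_ops, e_ops=[tanh_X])
s = np.real(result.expect[0])              # reservoir readout s(t) = tr(rho(t) tanh X)
\end{verbatim}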
Let us now provide details on the process of linear regression. Our objective is to find the predictor $A$ that satisfies the relationship
$
G \approx A s
$
(note that we think of $G$ and $s$ as column vectors). To this end, we conduct the experiment $T$ times and collect the resulting column vectors as $\left\{ \vec s_1, \vec s_2, \cdots, \vec s_T\right\}$. We then define a matrix $\mathbf{S}$ as the concatenation of these column vectors
\begin{align}
\mathbf{S} := \begin{pmatrix}
s_1 |\, s_2 |\, \cdots | s_T
\end{pmatrix}
\end{align}
Similarly, we define the matrix $\mathbf{G}$ as
\begin{align}
\mathbf G := \begin{pmatrix}
G_1 | G_2|\, \cdots \, | G_T
\end{pmatrix}.
\end{align}
Finally, we choose $A$ by applying Tikhonov regularization \cite{shalev2014understanding}, which results in the following choice of $A$
\begin{align}\label{eq:W}
A = \mathbf G \mathbf{S}^T (\mathbf{S}\mathbf{S}^T + \gamma \mathbb{I})^{-1}
\end{align}
with $\gamma$ and $\mathbb I$ being a regularization parameter and the identity matrix, respectively. One should note that $\mathbf G$ and $\mathbf S$ in \eqref{eq:W} correspond to the $T$ training samples that we take. The matrix $A$ evaluated above is then used for the prediction of the test data. \cref{fig:Schem} provides a schematic representation of the reservoir training.
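In code, the readout training of \eqref{eq:W} amounts to a single regularized least-squares solve; a minimal NumPy sketch (with an illustrative value of $\gamma$) reads as follows.
\begin{verbatim}
import numpy as np

def train_readout(S, G, gamma=1e-6):
    # S: (n_features, T) reservoir outputs, G: (M, T) targets.
    # Returns A = G S^T (S S^T + gamma I)^(-1), i.e. the Tikhonov solution.
    n = S.shape[0]
    return G @ S.T @ np.linalg.inv(S @ S.T + gamma * np.eye(n))
\end{verbatim}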
Overall, to predict time series, the reservoir is initially trained to determine $A$ by using equation \eqref{eq:W}. After training, a set of initial values outside of the training data is inputted into the oscillator. The oscillator uses $A$ to predict future values, which are then used as initial values for further predictions.
\section{Results}\label{sec:res}
This section investigates the performance of a single Kerr non-linear oscillator trained on the MG chaotic time series, as well as the effect of different quantumness measures on the performance of the reservoir. Specifically, in \cref{sec:pts} we discuss how well the non-linear oscillator can learn when trained on a chaotic time series, while in \cref{sec:q} we examine the impact of various quantumness measures. Lastly, we outline further investigations, including the effects of noise.
To simulate quantum dynamics in the Fock space, we truncate every operator in the number basis, making them $d_{\text{t}}$-dimensional. Notably, the simulation results in this work use a dimension $d_{\text{t}}\geq 20$, which is sufficiently large as most of the states considered have a significant overlap with the subspace spanned by the number states $\ket n$ for $n\leq10$ (For instance, we use the coherent state $\ket\alpha$ with $\alpha=1+\iota$, the overlap of which with the first $20$ Fock states is larger than $1-6.5\times 10^{-15}$).
\subsection{Learning Time Series}\label{sec:pts}
In what follows we report the results obtained by training our single non-linear oscillator.
Here, we consider the prediction of the chaotic MG series. The MG series is formally defined as the solution to the following delay differential equation
\begin{align}
\dot{x}(t) = \beta \frac{x(t-\tau)}{1+x(t-\tau)^n} - \gamma x(t).
\end{align}
We use the parameters $\beta = 0.2$, $\gamma = 0.1$, $n=10$, and $\tau = 17$ throughout this work. The performance of the trained reservoir on the test data is presented in \cref{fig:MG1}. \cref{fig:MG2} shows the delayed embedding of the predicted MG, which is compared to the actual diagram. One can readily observe that this model is successful in learning MG.
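For reference, the MG series with these parameters can be generated numerically, e.g., by a simple fixed-step Euler scheme with a delay buffer; the step size and constant initial history in the sketch below are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def mackey_glass(n_steps=10000, dt=0.1, beta=0.2, gamma=0.1, n=10, tau=17.0, x0=1.2):
    # Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t),
    # with constant history x(t) = x0 for t <= 0.
    delay = int(round(tau / dt))
    x = np.full(n_steps + delay, x0)
    for i in range(delay, len(x) - 1):
        x_tau = x[i - delay]
        x[i + 1] = x[i] + dt * (beta * x_tau / (1 + x_tau ** n) - gamma * x[i])
    return x[delay:]
\end{verbatim}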
\subsection{Quantumness}\label{sec:q}
In this section, we introduce our quantumness measure and, on that basis, study the effect of quantumness on the accuracy of the learning model.
\begin{figure}
\caption{
Average quantumness ($Q$) during the evolution is shown to be decreasing as $\kappa$ (the photon loss rate) increases. The states used as initial states are the mixed state (labeled as `mix') proportional to $\ket{\alpha}\bra{\alpha}+\ket{-\alpha}\bra{-\alpha}$, the cat state, the coherent state $\ket{\alpha}$, and the number state.}
\label{fig:Wigs}
\end{figure}
\subsubsection{Quantumness Measures}\label{sec:quant}
Our main goal here is to determine if there is a correlation between the quantumness of the system and the accuracy of the learning process. An affirmative answer would be a quantum advantage for this computational model. To this end, we need to quantify the quantumness of a state in Fock space. We point out that there has been extensive research done on the quantification of quantumness \cite{ groisman2007quantumness, ollivier2001quantum, takahashi1986wigner}.
Furthermore, there has been a line of research in the study of the macroscopicity of quantum states and their effective size \cite{frowis2018macroscopic, nimmrichter2013macroscopicity, leggett1985quantum, leggett2016note}. One such measure in the Fock space is defined by
Lee and Jeong \cite{lee2011quantification} as follows:
\begin{equation}
\begin{split}
I(\hat\rho) := \pi \bigg( &\int_{x,p} \big(\partial_xW(x,p)\big)^2\\ &+\big(\partial_pW(x,p)\big)^2 - 2W(x,p)^2 \bigg)
\end{split}
\end{equation}
where $x,p$ are dimensionless position and momentum variables (in these units, the uncertainty principle becomes $\Delta x\, \Delta p \geq 1/2$). This formulation can also be found in \cite{I}. The following identities are pointed out in \cite{lee2011quantification}:
\begin{itemize}
\item $I(\ket{\alpha} \bra{\alpha}) = 0$, for any coherent state $\ket\alpha$.
\item $\forall n \in \mathbb{N}: I(\sum_{i=0}^{n-1} \frac{1}{n}\, \ket{i}\bra{i}) = 0$, where $\ket{i}$ are the Fock states (i.e., the eigenvectors of the number operator).
\end{itemize}
It is worth mentioning that this measure can take negative values \cite{I}. Intuitively, one could think of $I(\hat\rho)$ as quantifying the fineness of the structure of the Wigner function associated with $\hat\rho$. The aforementioned results suggest that positivity of $I$ indicates non-classicality, as coherent states and density matrices that are diagonal in the Fock basis are considered classical here. In the following theorem, we prove that if $I(\hat\rho) >0$, then $\hat\rho$ cannot be written as a mixture of coherent states and is hence non-classical.
\begin{thm}\label{thm}
If $\hat\rho$ is a mixture of coherent states, then $I(\hat\rho)\leq 0$.
\end{thm}
The proof is provided in \cref{PfOfThm}.
One should also note that $I$ is computable in a much shorter time, as it can be reformulated (see \cite{lee2011quantification}) as
\begin{align}
I(\hat\rho) = -2\,\tr\left(\hat\rho\, \mathcal{D}_a(\hat\rho)\right).
\end{align}
On the other hand, the computation of Wigner negativity with the current algorithms is costly, as it requires the computation of the entire Wigner function.
We hence use the following quantumness measure $Q$
\begin{align}
Q(\hat\rho) = \begin{cases}
I(\hat\rho) \, &\text{if } I(\hat\rho) >0,\\
0 & \text{o.w.}
\end{cases}
\end{align}
We make this choice as we do not want our quantumness measure to take negative values. We observe that this measure is consistent with some intuitions regarding the quantumness of reservoir computing that have previously been used in \cite{PhysRevResearch.3.013077}, in particular the intuition that by increasing $\kappa$ we should reach a classical limit, which is illustrated in \cref{fig:Wigs}. Finally, we point out that the quantumness of the state changes during the evolution, as observed in \cref{fig:QuantKappa}.
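As a numerical illustration (independent of the authors' code), $I$ can be evaluated directly from its Wigner-function definition on a phase-space grid; in the following QuTiP/NumPy sketch, the grid size and the test states are illustrative choices.
\begin{verbatim}
import numpy as np
import qutip as qt

def lee_jeong_I(rho, xmax=6.0, npts=201):
    # Evaluate I(rho) = pi * int[(dW/dx)^2 + (dW/dp)^2 - 2 W^2] dx dp numerically.
    xs = np.linspace(-xmax, xmax, npts)
    W = qt.wigner(rho, xs, xs)             # square grid; axis order is immaterial here
    dWdx = np.gradient(W, xs, axis=1)
    dWdp = np.gradient(W, xs, axis=0)
    dx = xs[1] - xs[0]
    return np.pi * np.sum(dWdx**2 + dWdp**2 - 2 * W**2) * dx * dx

print(lee_jeong_I(qt.coherent_dm(30, 1.0)))   # approximately 0 for a coherent state
print(lee_jeong_I(qt.fock_dm(30, 1)))         # positive for a Fock state
\end{verbatim}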
\begin{figure}
\caption{Quantumness and Wigner plots of the reservoir's state evolution. The oscillator parameters are $(\alpha, \kappa, K) = (1.2, 0.1, 0.05)$.}
\label{fig:QuantKappa}
\end{figure}
\subsubsection{Example States}
To study the effect of quantumness, we trained the reservoir initialized in different states, including the cat state (i.e., the normalized $\ket{\alpha} + \ket{-\alpha}$), the corresponding mixed state (i.e., the normalized $\ket{\alpha}\bra{\alpha} + \ket{-\alpha}\bra{-\alpha}$), the coherent state $\ket \alpha$, and the number state $\ket n$. We further added the data obtained from the training of a classical model, and we varied the training data sizes to better compare the different states and models. The result is presented in \cref{fig:trainingCurve}. Despite not showing a significant advantage for states with high quantumness, this diagram reveals that the quantum model for the reservoir outperforms the classical model in the task of MG prediction.
\begin{figure}
\caption{Training curves for both classical and quantum models. Different initial states for the quantum model have been considered: the mixed state (labeled as `mix') proportional to $\ket{\alpha}\bra{\alpha}+\ket{-\alpha}\bra{-\alpha}$, the cat state, the coherent state $\ket{\alpha}$, and the number state.}
\label{fig:trainingCurve}
\end{figure}
\subsubsection{Random States}
Since the distinction between our examples in \cref{fig:trainingCurve} does not demonstrate a clear correlation between the quantumness and the performance, we pick random states and scrutinize the correlations between the training accuracy and quantumness measures. Interestingly, \cref{fig:hist} shows a clear correspondence between the quantumness and the performance, providing us with evidence that quantumness does indeed help achieve more precise predictions. For the initial random states, we fix a dimension $d$ and then pick a state according to the Haar measure on $\mathcal H(\mathbb C^d)$ \cite{haar1933massbegriff}.
To this end, we use \cite[Proposition 7.17]{watrous2018theory}. In particular, we consider the set of $2d$ independent and identically distributed (i.i.d.) standard Gaussian random variables $X_1, Y_1, \cdots, X_d, Y_d$ to construct the state
\begin{align}\label{eq:Gaussian}
\ket \psi = \frac{\left(X_1 + \iota Y_1, \cdots, X_d + \iota Y_d\right)}{\sqrt{\sum_{i} X_i^2 + Y_i^2}}
\end{align}
which is a Haar-random state in $\mathcal H(\mathbb C^d)$. To elaborate further, according to \cite[Proposition 7.17]{watrous2018theory}, the distribution of the states generated by \eqref{eq:Gaussian} is invariant under the action of unitaries on $\mathcal{H}(\mathbb C^d)$. We repeated this random state generation for $d=4$ to $d=10$, collecting 20 samples for each dimension, and truncated the Fock space to the subspace spanned by the first $25$ number states (i.e., $d_{\text{t}}=25$) to obtain the results presented in \cref{fig:MainA}. We emphasize that the same training protocol is applied to all such states in every dimension.
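A minimal NumPy sketch of this sampling procedure (the function name is ours) is:
\begin{verbatim}
import numpy as np

def haar_random_state(d, rng=None):
    # Normalized vector of i.i.d. complex standard Gaussians, as in the construction
    # above; its distribution is unitarily invariant, hence Haar random.
    rng = np.random.default_rng() if rng is None else rng
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    return z / np.linalg.norm(z)
\end{verbatim}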
\begin{figure*}
\caption{Training accuracy for the task of Mackey-Glass prediction, using $140$ random states ($20$ data points for each $d = 4, \cdots, 10$). We elaborate on the random selection process at the end of Section~\ref{sec:q}.}
\label{fig:MainA}
\label{fig:hist}
\label{fig:Main}
\end{figure*}
The correlation coefficients of the data show a relationship between quantumness and training accuracy. We further investigated the effect of quantumness using a t-test. Our hypothesis test (\cref{fig:hist}) resulted in a $p$-value of $2.1\times 10^{-4}$. As a reminder, the $p$-value in our case is an estimate of the probability of observing results at least as extreme as ours under the hypothesis that quantumness has no positive effect on the training accuracy. Such a small $p$-value therefore supports the claim that quantumness improves the training outcome. We further investigate the relation between dimensionality and the error. We note that increasing the dimension used for the oscillator can be understood as increasing the complexity and the power of the model. From \cref{fig:MainA} we observe that although dimensionality correlates well with quantumness, it does not correlate much with accuracy. This suggests that quantumness is a more effective factor than complexity.
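For illustration, the sketch below runs this kind of analysis on synthetic data: it computes a Pearson correlation and a one-sided two-sample $t$-test after splitting the samples by the median quantumness. The synthetic data, the grouping rule, and the use of SciPy are assumptions of the example and not a specification of the exact test behind the quoted $p$-value.
\begin{verbatim}
# Illustrative correlation / hypothesis-test sketch (not the paper's script).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
quantumness = rng.uniform(0.0, 1.0, size=140)                        # stand-in measurements
error = 0.06 - 0.03 * quantumness + rng.normal(0, 0.01, size=140)    # synthetic errors

r, p_corr = stats.pearsonr(quantumness, error)
print("Pearson r:", round(r, 3), "p-value:", p_corr)

# One-sided t-test: is the error of the high-quantumness half smaller?
median_q = np.median(quantumness)
low, high = error[quantumness <= median_q], error[quantumness > median_q]
t, p_t = stats.ttest_ind(high, low, alternative="less", equal_var=False)
print("t-statistic:", round(t, 3), "one-sided p-value:", p_t)
\end{verbatim}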
A similar analysis is presented in \cref{app:Wigner}, where we consider the Wigner negativity as our quantumness measure. In addition, we investigate the robustness of the model by introducing a variety of noise types into the evolution. Our results, presented in \cref{app:noise}, suggest that the model can tolerate a considerable amount of noise.
\section{Discussion}\label{sec:discussion}
In this study, we focused on evaluating the effectiveness of a quantum non-linear oscillator in making time-series predictions and examining how quantumness impacts the quantum learning model. We utilize two quantumness metrics, namely the Wigner negativity and the Lee-Jeong measure \cite{lee2011quantification}. The former is a widely accepted measure, while the latter was initially introduced to measure macroscopicity and in this work is shown to be a quantumness measure as well. Through our methodologies, we discovered that quantumness has a stronger correlation with performance than the dimensionality of the reservoir's state. Overall, our findings contribute to a deeper understanding of the role of quantumness in continuous-variable reservoir computing and highlight its potential for enhancing the performance of this computational model.
Our work raises a number of important questions. Firstly, we aim to determine what specific structures within a reservoir computing model will lead to quantum speed-ups. Additionally, one can investigate the impact of quantumness on a network of oscillators in future research. Notably, when dealing with a network of continuous variable oscillators, entanglement as a measure of quantumness could also be examined. It is worth mentioning that our method has potential for implementation on actual quantum hardware, and may even be feasible with current limited devices due to the strong Kerr non-linearity present in models for a transmon superconducting qubit \cite{bertet2012circuit}.
\section{Code Availability}
The codes used for the generation of the plots of this manuscript are publicly available at \href{https://github.com/arsalan-motamedi/QRC}{https://github.com/arsalan-motamedi/QRC}.
\section{Contributions}
All authors contributed extensively to the presented work. H.Z-H. and C.S. conceived the original ideas and supervised the project. A.M. performed analytical studies and numerical simulations and generated different versions of the manuscript. H.Z-H. and C.S. verified the calculations, provided detailed feedback on the manuscript, and applied many insightful updates.
\begin{appendices}
\crefalias{section}{appendix}
\section{Comparisons with Wigner negativity}\label{app:Wigner}
In this appendix, we examine the quantumness of the system via the Wigner negativity. We recall that the Wigner negativity is the volume of the negative part of the Wigner function (the region below the $W=0$ plane), and it is often considered a quantumness measure in the literature \cite{hudson1974wigner, kenfack2004negativity}. Computing the Wigner negativity is more costly than computing the Lee-Jeong measure introduced in \cref{sec:quant}; hence, we only compute it for the initial state. The result of this study is presented in \cref{fig:WigNeg}. As we observe, there is a strong correlation between the two quantumness measures. Furthermore, the Wigner negativity correlates well with the test error of the experiment.
\begin{figure}
\caption{The effects of Wigner negativity on the training performance. As observed, we get a correlation between Wigner negativity and test error, which again highlights the effect of quantumness. Moreover, we observe that there is a strong correlation between the quantumness measures considered in this work.}
\label{fig:WigNeg}
\end{figure}
\section{Noise}\label{app:noise}
Noise is an inevitable factor in quantum devices, and it is of profound importance for a quantum computing approach to be robust to noise. We show the robustness of this approach by considering different noise models as explained below.
\begin{figure}
\caption{Noisy reservoir learning MG. The MG training process is performed when there are a variety of noises applied to the reservoir.}
\label{fig:noisepred}
\label{fig:noisephase}
\label{fig:NoisyMG}
\end{figure}
\textit{Dephasing, pumping, and white noise.} We introduce a dephasing error by considering the Lindblad operators $L_n = \lambda \ket{n}\bra{n}$. As observed in the experiments, this does not affect the results significantly. Furthermore, the incoherent pumping error is simulated by the Lindblad operator $\lambda a^\dagger$. On top of these, we add white noise to the input of the reservoir. This is done by changing the equations of evolution \eqref{eq:ev-2} through the substitution $f(t) \rightarrow f(t) + \lambda' n(t)$. Here, $n(t)$ is white noise of unit power, and $\lambda'$ controls its strength. \cref{fig:NoisyMG} shows the performance of the reservoir's output under incoherent pumping, dephasing, and white noise (see \cref{eq:ev-2}). We have set $(K,\kappa, \alpha, \lambda, \lambda')= (0.05, 0.15, 1.2, 0.05, 0.02)$.
It is worth mentioning that noise in the context of reservoir computing has been shown to be useful in certain cases \cite{noise, fry2023optimizing}.
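A minimal sketch of a Lindblad simulation with the dephasing and pumping operators above is shown next, assuming QuTiP; the Hamiltonian is a stand-in Kerr term without the input drive, since the full evolution of \cref{eq:ev-2} is not reproduced here.
\begin{verbatim}
# Sketch of a noisy (Lindblad) evolution; H is a stand-in Kerr Hamiltonian,
# not the full driven model of Eq. (eq:ev-2).
import numpy as np
from qutip import basis, coherent, destroy, mesolve, num

N = 25                      # Fock truncation (illustrative)
K, kappa, lam = 0.05, 0.15, 0.05
a = destroy(N)

H = K * a.dag() * a.dag() * a * a          # Kerr nonlinearity (stand-in)

c_ops = [np.sqrt(kappa) * a]                                         # photon loss
c_ops += [lam * basis(N, n) * basis(N, n).dag() for n in range(N)]   # dephasing L_n
c_ops += [lam * a.dag()]                                             # incoherent pumping

rho0 = coherent(N, 1.2)
tlist = np.linspace(0.0, 20.0, 201)
result = mesolve(H, rho0, tlist, c_ops, e_ops=[num(N)])

print(result.expect[0][:5])   # mean photon number at the first few times
\end{verbatim}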
\section{Proof of Theorem \ref{thm}}\label{PfOfThm}
Recall the Gaussian integrals, which we will use at the end of the proof:
\begin{equation}\label{eq:GI}
\begin{split}
\int_{\xi\in\mathbb{R}} e^{-\frac{\xi^2}{a^2}} \, d\xi &= \sqrt{\pi\, a^2},\\
\int_{\xi\in\mathbb{R}} \xi^2 \, e^{-\frac{\xi^2}{a^2}}\, d\xi &= \frac{a^2}{2}\, \sqrt{\pi\, a^2}.
\end{split}
\end{equation}
Note that for any coherent state $\ket{\alpha}$, one has
\begin{align*}
W_{\alpha} = \frac{1}{\pi}\, e^{- \big[ (x-\text{Re}(\alpha))^2 + (p-\text{Im}(\alpha))^2 \big]}
\end{align*}
Let us consider a set of $K$ coherent states, namely $\{\rho_{i} = \ket{\alpha_i}\bra{\alpha_i}: i=1,2,\cdots,K \}$, and define $x_i : = \text{Re}(\alpha_i)$, $p_i := \text{Im}(\alpha_i)$. Also let $(q_i)_{i\in [K]}$ be a probability distribution over $K$ objects. We can then consider the mixture of coherent states as
\begin{align*}
\rho = \sum_{i=1}^K q_i\, \ket{\alpha_i}\bra{\alpha_i}
\end{align*}
Since the Wigner function is linear with respect to the density matrix, one has
\begin{align*}
W_{\rho}(x,p) = \sum_{i=1}^K q_i\, W_{\alpha_i}(x,p)
\end{align*}
Hence, we get
\begin{align*}
\frac{1}{\pi}\, I(\rho) &= \int_{x,p} \big(\sum_{i\in[K]} \, q_i\, \partial_xW_i(x,p)\big)^2 \notag \\
& \quad \quad \quad + \big(\sum_{i\in[K]} \, q_i\, \partial_pW_i(x,p)\big)^2 \notag \\
& \quad \quad \quad - 2 \big(\sum_{i\in[K]} \, q_i\, W_i(x,p)\big)^2 \\
&= \sum_{i,j\in[K]} \, q_i\, q_j\, \int_{x,p}\bigg( \partial_{x} W_i\, \partial_{x} W_j\notag \\
&\qquad \quad+ \partial_{p} W_i\, \partial_{p} W_j - 2W_i\, W_j \bigg)
\end{align*}
Note that the terms in the summation above with $i=j$ can be rewritten as $q_i^2\, I(\rho_i) = 0$, since any coherent state $\rho_i$ has zero quantumness, i.e., $I(\rho_i)=0$. Furthermore, \cref{claim1} below guarantees that for any choice of $i,j$ the expression in the parentheses is non-positive, and hence the proof is complete.
\begin{claim}\label{claim1}
For any two coherent states, say $\ket{\alpha_0}$ and $\ket{\alpha_1}$, both of the following inequalities hold
\begin{equation}
\begin{split}
\int_{x,p} \bigg(\partial_{x} W_0\, \partial_{x} W_1 - W_0\, W_1\bigg) &\leq 0\\
\int_{x,p} \bigg(\partial_{p} W_0\, \partial_{p} W_1 - W_0\, W_1\bigg) &\leq 0
\end{split}
\end{equation}
\end{claim}
\begin{proof}
One has
\begin{align*}
W_0 = \frac{1}{\pi}\, e^{- \big[ (x-x_0)^2 + (p-p_0)^2 \big]}
\end{align*}
hence
\begin{equation*}
\begin{split}
\partial_x W_0 &= -2 \frac{(x-x_0)}{\pi}\, e^{- \big[ (x-x_0)^2 + (p-p_0)^2 \big]},\\ \partial_p W_0 &= -2 \frac{(p-p_0)}{\pi}\, e^{- \big[ (x-x_0)^2 + (p-p_0)^2 \big]}
\end{split}
\end{equation*}
and similar expressions for $W_1$ and its derivatives. Let us now prove the first inequality. Define
\begin{align*}
\mathcal A := \int_{x,p} \bigg(\partial_{x} W_0\, \partial_{x} W_1 - W_0\, W_1\bigg)
\end{align*}
then, by direct substitution one gets
\begin{align*}
\mathcal A = \frac{1}{\pi^2}\, &\int_{x,p}\, \big( 4(x-x_0)(x-x_1) - 1 \big) \notag \\
& \times e^{- \big[ (x-x_0)^2 + (x-x_1)^2 + (p-p_0)^2 + (p-p_1)^2 \big] }
\end{align*}
We may now use the elementary identities
\begin{align*}
(x-x_0)(x-x_1) &=\left(x - \frac{x_0+x_1}{2}\right)^2 - \left(\frac{x_0 - x_1}{2} \right)^2,\\
(x-x_0)^2 + (x-x_1)^2 &= 2\left(x - \frac{x_0+x_1}{2}\right)^2 + 2\left(\frac{x_0 - x_1}{2} \right)^2,
\end{align*}
and, letting $\Delta x := x_0 - x_1$, $\Delta p := p_0-p_1$, $\overline{x} := \frac{x_0 + x_1}{2}$, and $\overline{p} := \frac{p_0+p_1}{2}$, conclude that
\begin{align*}
\mathcal A &= \frac{e^{-\frac{1}{2} ( \Delta x^2 + \Delta p^2) }}{\pi^2}\, \notag \\
& \quad \quad \times \int_{x,p} \big[ 4\big(x - \overline{x}\big)^2 - (\Delta x)^2 - 1 \big] \\
&\quad \qquad e^{ -2(x - \overline{x})^2 -2(p - \overline{p})^2 }\\
&= -\frac{e^{-\frac{1}{2} ( \Delta x^2 + \Delta p^2)}}{2\pi}\, (\Delta x)^2 \leq 0
\end{align*}
(for the last equality, we use the Gaussian integrals in \cref{eq:GI}). A similar argument gives the second inequality of the claim.
\end{proof}
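As a numerical sanity check of the closed form obtained for $\mathcal A$, the following sketch evaluates the defining integral on a grid for randomly chosen coherent amplitudes and compares it with $-e^{-\frac{1}{2}(\Delta x^2 + \Delta p^2)}(\Delta x)^2/(2\pi)$; the grid and the amplitudes are arbitrary choices for the check.
\begin{verbatim}
# Numerical check of the closed form for A (grid quadrature, illustrative).
import numpy as np

def W(x, p, x0, p0):
    """Wigner function of a coherent state in the convention used above."""
    return np.exp(-((x - x0) ** 2 + (p - p0) ** 2)) / np.pi

rng = np.random.default_rng(2)
x0, p0, x1, p1 = rng.uniform(-1.5, 1.5, size=4)

x = np.linspace(-8, 8, 801)
p = np.linspace(-8, 8, 801)
X, P = np.meshgrid(x, p, indexing="ij")
dA = (x[1] - x[0]) * (p[1] - p[0])

W0, W1 = W(X, P, x0, p0), W(X, P, x1, p1)
dxW0 = -2 * (X - x0) * W0
dxW1 = -2 * (X - x1) * W1

A_numeric = np.sum(dxW0 * dxW1 - W0 * W1) * dA
dx, dp = x0 - x1, p0 - p1
A_closed = -np.exp(-0.5 * (dx ** 2 + dp ** 2)) * dx ** 2 / (2 * np.pi)

print(A_numeric, A_closed)   # the two values should agree to high accuracy
\end{verbatim}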
\end{appendices}
\pagebreak
\widetext
\begin{center}
\textbf{\large Supplementary Material for: Correlations Between Quantumness and Learning Performance in Reservoir Computing with a Single Oscillator}
\end{center}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{page}{1}
\setcounter{section}{0}
\makeatletter
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\bibnumfmt}[1]{[S#1]}
\renewcommand{\citenumfont}[1]{S#1}
\section{R\"ossler Attractor}\label{sec:chaos}
This section reports the training performance on the R\"ossler attractor. The R\"ossler attractor \cite{rossler1976equation, rossler1979equation} is a three-dimensional motion following the dynamics
\begin{equation}
\begin{cases}
\frac{dx}{dt} &= -y-z\\
\frac{dy}{dt} &= x + ay\\
\frac{dz}{dt} &= b+ z(x-c)
\end{cases}
\end{equation}
where $(a,b,c)\in\mathbb R^3$ are constant parameters, which we set to $(0.2,0.2,5.7)$ in our experiment. \cref{fig:Ros1} and \cref{fig:Ros2} show the model's learning results for this chaotic time series. We highlight that each component of the time series is learned independently of the others.
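To make the target dynamics concrete, the following sketch integrates the R\"ossler system with these parameter values and extracts the three component time series; the initial condition and sampling grid are illustrative choices.
\begin{verbatim}
# Sketch: generate Rossler time series with (a, b, c) = (0.2, 0.2, 5.7).
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, state, a=0.2, b=0.2, c=5.7):
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

t_span = (0.0, 200.0)
t_eval = np.linspace(*t_span, 4000)           # sampling grid (illustrative)
sol = solve_ivp(rossler, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)

x_series, y_series, z_series = sol.y           # each component is a separate target
print(x_series.shape, x_series[:5])
\end{verbatim}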
\begin{figure}
\caption{R\"ossler attractor training with a quantum oscillator.}
\label{fig:Ros1}
\label{fig:Ros2}
\label{fig:Rossler}
\end{figure}
\section{More on Noise}
In this section, we investigate the effect of adding white noise to the input of the reservoir. This error is introduced by changing the equations of evolution through the substitution $f(t) \rightarrow f(t) + \lambda n(t)$. Here, $n(t)$ is white noise of unit power, and $\lambda$ controls its strength. First, we consider white noise applied to the input of the reservoir. \cref{fig:ArbF} shows the outcome of learning noisy periodic functions. Despite significant signal distortion caused by the noise, the oscillator demonstrates the ability to learn the underlying periodic functions. We further investigated the effect of training on a sawtooth signal with different noise levels, which resulted in \cref{fig:stn}. A similar experiment, this time with the MG series, resulted in a training error of $0.053$. Noting that the noiseless case yields a training error of $0.047$, we conclude that the model is robust to this noise model for a variety of prediction tasks. We chose the parameters $\alpha = 0.1$, $\kappa = K = 0.05$ in obtaining these results.
\begin{figure}
\caption{Test error of training on a noisy sawtooth function. The initial states are the cat state and its classical mixture, i.e., the normalized $\ket{\alpha}+\ket{-\alpha}$ and $\ket{\alpha}\bra{\alpha}+\ket{-\alpha}\bra{-\alpha}$, respectively.}
\label{fig:stn}
\end{figure}
\begin{figure}
\caption{Learning noisy periodic functions. The input to the reservoir is contaminated by white noise. However, the reservoir is still able to learn the input signal.}
\label{fig:ArbF}
\end{figure}
\section{Evolution Animations}
Animations showing the evolution of the Wigner function throughout the process are prepared and made available online at \href{https://github.com/arsalan-motamedi/QRC/tree/main/EvolutionAnimations}{https://github.com/arsalan-motamedi/QRC/tree/main/EvolutionAnimations}.
\end{document}
\begin{document}
\title{Well-rounded zeta-function of planar arithmetic lattices}
\author{Lenny Fukshansky}\thanks{The author was partially supported by a grant from the Simons Foundation (\#208969 to Lenny Fukshansky) and by the NSA Young Investigator Grant \#1210223.}
\address{Department of Mathematics, 850 Columbia Avenue, Claremont McKenna College, Claremont, CA 91711}
\email{[email protected]}
\subjclass[2010]{11H06, 11H55, 11M41, 11E45}
\keywords{arithmetic lattices, integral lattices, well-rounded lattices, Dirichlet series, zeta-functions}
\begin{abstract}
We investigate the properties of the zeta-function of well-rounded sublattices of a fixed arithmetic lattice in the plane. In particular, we show that this function has abscissa of convergence at $s=1$ with a real pole of order 2, improving upon a result of \cite{kuehnlein}. We use this result to show that the number of well-rounded sublattices of a planar arithmetic lattice of index at most $N$ is $O(N \log N)$ as $N \to \infty$. To obtain these results, we produce a description of integral well-rounded sublattices of a fixed planar integral well-rounded lattice and investigate convergence properties of a zeta-function of similarity classes of such lattices, building on the results of \cite{fletcher_jones}.
\end{abstract}
\maketitle
\section{Introduction}
\label{intro}
Let $\Lambda = A{\mathbb Z}^2 \subset {\mathbb R}^2$ be a lattice of full rank in the plane, where $A=({\boldsymbol a}_1\ {\boldsymbol a}_2)$ is a basis matrix. The corresponding norm form is defined as
$$Q_A({\boldsymbol x}) = {\boldsymbol x}^t A^t A {\boldsymbol x}.$$
We say that $\Lambda$ is {\it arithmetic} if the entries of the matrix $A^tA$ generate a 1-dimensional ${\mathbb Q}$-vector subspace of ${\mathbb R}$. This property is easily seen to be independent of the choice of a basis. We define $\operatorname{det}(\Lambda)$ to be $|\operatorname{det}(A)|$, again independent of the basis choice, and (squared) {\it minimum} or {\it minimal norm}
$$|\Lambda| = \min \{ \|{\boldsymbol x}\|^2 : {\boldsymbol x} \in \Lambda \setminus \{{\boldsymbol 0}\} \} = \min \{ Q_A({\boldsymbol y}) : {\boldsymbol y} \in {\mathbb Z}^2 \setminus \{{\boldsymbol 0}\} \},$$
where $\|\ \|$ stands for the usual Euclidean norm. Then each ${\boldsymbol x} \in \Lambda$ such that $\|{\boldsymbol x}\|^2 = |\Lambda|$ is called a {\it minimal vector}, and the set of minimal vectors of $\Lambda$ is denoted by $S(\Lambda)$. A planar lattice $\Lambda$ is called {\it well-rounded} (abbreviated WR) if the set $S(\Lambda)$ contains a basis for $\Lambda$; we will refer to such a basis as a {\it minimal basis} for~$\Lambda$.
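For illustration, well-roundedness of a planar integer lattice can be tested by Lagrange (Gauss) reduction of a basis, since a reduced basis realizes the successive minima; the following is a minimal sketch in Python, with the example bases chosen arbitrarily.
\begin{verbatim}
# Sketch: minimum and well-roundedness of a planar lattice via Lagrange reduction.
def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def lagrange_reduce(b1, b2):
    """Return a reduced basis (b1, b2) with |b1| <= |b2| realizing the successive minima."""
    if norm2(b1) > norm2(b2):
        b1, b2 = b2, b1
    while True:
        # floating-point rounding is adequate for the small examples here
        mu = round(dot(b1, b2) / norm2(b1))
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])
        if norm2(b2) >= norm2(b1):
            return b1, b2
        b1, b2 = b2, b1

def is_well_rounded(b1, b2):
    """A planar lattice is WR iff its two successive minima coincide."""
    r1, r2 = lagrange_reduce(b1, b2)
    return norm2(r1) == norm2(r2)

# (2,1), (-1,2) spans a rotated, scaled copy of Z^2 (index 5 in Z^2): WR.
print(is_well_rounded((2, 1), (-1, 2)))   # True
# (2,0), (1,2) has successive minima 4 and 5: not WR.
print(is_well_rounded((2, 0), (1, 2)))    # False
\end{verbatim}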
While in this note we focus on the planar case, the notion of WR lattices is defined in every dimension: a full-rank lattice in ${\mathbb R}^N$ is WR if it contains $N$ linearly independent minimal vectors -- the fact that these form a basis for the lattice is a low-dimensional phenomenon, only valid for $N \leq 4$. WR lattices are important in discrete optimization, in particular in the investigation of sphere packing, sphere covering, and kissing number problems \cite{martinet}, as well as in coding theory \cite{esm}. Properties of WR lattices have also been investigated in \cite{mcmullen} in connection with Minkowski's conjecture and in \cite{lf:robins} in connection with the linear Diophantine problem of Frobenius. Furthermore, WR lattices are used in cohomology computations of $\operatorname{SL}_N({\mathbb Z})$ and its subgroups \cite{ash}. These considerations motivate the study of distribution properties of WR lattices. Distribution of WR lattices in the plane has been studied in \cite{wr1}, \cite{wr2}, \cite{wr3}, \cite{fletcher_jones}, \cite{kuehnlein}. In particular, these papers investigate various aspects of distribution properties of WR sublattices of a fixed planar lattice.

An important equivalence relation on lattices is geometric similarity: two lattices $\Lambda_1, \Lambda_2 \subset {\mathbb R}^2$ are called {\it similar}, denoted $\Lambda_1 \sim \Lambda_2$, if there exist $\alpha \in {\mathbb R}_{>0}$ and $U \in O_2({\mathbb R})$ such that $\Lambda_2 = \alpha U \Lambda_1$. It is easy to see that similar lattices have the same algebraic structure, i.e., for every sublattice $\Gamma_1$ of a fixed index in $\Lambda_1$ there is a sublattice $\Gamma_2$ of the same index in $\Lambda_2$ so that $\Gamma_1 \sim \Gamma_2$. A WR lattice can only be similar to another WR lattice, so it makes sense to speak of WR similarity classes of lattices. In \cite{kuehnlein} it has been proved that a planar lattice contains infinitely many non-similar WR sublattices if and only if it contains one. This is always the case for arithmetic planar lattices. If the lattice in question is not arithmetic, it may still have infinitely many non-similar WR sublattices depending on the value of a certain invariant described in \cite{kuehnlein}. In any case, it appears that non-arithmetic planar lattices contain fewer WR sublattices than arithmetic ones in the sense which we discuss below.
Given an infinite finitely generated group $G$, it is a much-studied problem to determine the asymptotic growth of $\# \left\{ H \leq G : \left| G:H \right| \leq N \right\}$, the number of subgroups of index no greater than $N$, as $N \to \infty$ (see \cite{lubot}). One approach that has been used by different authors with great success entails looking at the analytic properties of the corresponding Dirichlet-series generating function $\sum_{H \leq G} \left| G:H \right|^{-s}$ and then using some Tauberian theorem to deduce information about the rate of growth of partial sums of its coefficients (see \cite{sautoy}, as well as Chapter~15 of \cite{lubot}). In case $G$ is a free abelian group of rank 2, i.e. a planar lattice, this Dirichlet series allows one to count sublattices of finite index, and is a particular instance of the Solomon zeta-function (see \cite{reiner}, \cite{solomon}). We will use a similar approach while restricting to just WR sublattices, which is a more delicate arithmetic problem.

Fix a planar lattice $\Omega$, and define the {\it zeta-function of WR sublattices} of $\Omega$ to be
$$\zeta_{\operatorname{WR}}(\Omega,s) = \sum_{\operatorname{WR}\ \Lambda \subseteq \Omega} \frac{1}{\left| \Omega : \Lambda \right|^s} = \sum_{n=1}^{\infty} \frac{\# \{\operatorname{WR}\ \Lambda \subseteq \Omega : \left| \Omega : \Lambda \right| = n\}}{n^s}$$
for $s \in {\mathbb C}$. The rate of growth of the coefficients of this function can be conveyed by studying its abscissa of convergence and the behavior of the function near it. For brevity of notation, we will say that an arbitrary Dirichlet series $f(s) = \sum_{n=1}^{\infty} a_n n^{-s}$ has an {\it abscissa of convergence} with a {\it real pole of order $\mu$} at $s=\rho$ if $f(s)$ is absolutely convergent for $\operatorname{Re}(s) > \rho$, and for $s \in {\mathbb R}$
\begin{equation}
\label{lim_def}
\lim_{s \to \rho^+} (s-\rho)^{\mu} \sum_{n=1}^{\infty} \frac{a_n}{n^s}
\end{equation}
exists and is nonzero. Notice that this notion does not imply existence of analytic continuation for $f(s)$, but is merely a statement about the rate of growth of the coefficients of $f(s)$, which is precisely what we require. For instance, in \cite{wr1} and \cite{wr2} it has been established that $\zeta_{\operatorname{WR}}({\mathbb Z}^2,s)$ has abscissa of convergence with a real pole of order 2 at $s=1$. Furthermore, it has been shown in \cite{kuehnlein} that if $\Omega$ is a non-arithmetic planar lattice containing WR sublattices, then $\zeta_{\operatorname{WR}}(\Omega,s)$ has abscissa of convergence with a real pole of order 1 at $s=1$ (in fact, Lemma~3.3 of~\cite{kuehnlein} combined with Theorem~4 on p.~158 of~\cite{lang} imply the existence of an analytic continuation of $\zeta_{\operatorname{WR}}(\Omega,s)$ in this situation to $\operatorname{Re}(s) > 1-{\varepsilon}$ for some ${\varepsilon} > 0$ with a pole of order 1 at $s=1$). It is natural to expect that the situation for any arithmetic lattice is the same as it is for ${\mathbb Z}^2$; in fact, another result of \cite{kuehnlein} states that for any arithmetic lattice $\Omega$, $\zeta_{\operatorname{WR}}(\Omega,s)$ has abscissa of convergence at $s=1$, and it is conjectured that it has a pole of order 2 at $s=1$. The main goal of the present paper is to prove the following result in this direction.
\begin{thm} \label{main} Let $\Omega$ be a planar arithmetic lattice. Then $\zeta_{\operatorname{WR}}(\Omega,s)$ has abscissa of convergence with a real pole of order 2 at $s=1$ in the sense of \eqref{lim_def} above. Moreover,
\begin{equation}
\label{growth_bnd_1}
\# \{\operatorname{WR}\ \Lambda \subseteq \Omega : \left| \Omega : \Lambda \right| \leq N\} = O(N \log N)
\end{equation}
as $N \to \infty$.
\end{thm}
\smallskip
\begin{rem} \label{WR_growth} To compare, Theorem~4.20 of \cite{sautoy} combined with Lemma~3.3 (and the Corollary following it) of \cite{kuehnlein} imply that if $\Omega$ is a non-arithmetic planar lattice containing WR sublattices, then the right hand side of \eqref{growth_bnd_1} is equal to $O(N)$. It should be pointed out that by writing that a function of $N$ is equal to $O(N \log N)$ (respectively, $O(N)$) we mean here that it is asymptotically bounded from above and below by nonzero multiples of $N \log N$ (respectively,~$N$). On the other hand, it is a well known fact (outlined, for example, on p.~793 of \cite{sautoy}) that for any planar lattice~$\Omega$,
\begin{equation}
\label{growth_bnd_2}
\# \{ \Lambda \subseteq \Omega : \left| \Omega : \Lambda \right| \leq N\} \sim \left( \pi^2/12 \right) N^2
\end{equation}
as $N \to \infty$.
\end{rem}
The organization of this paper is as follows. In Section~\ref{IWR}, we start by reducing the problem to integral WR (abbreviated IWR) lattices in Lemma~\ref{reduce}: a planar lattice $\Lambda = A{\mathbb Z}^2$ is called {\it integral} if the coefficient matrix $A^t A$ of its quadratic form $Q_A$ has integer entries (this definition does not depend on the choice of a basis). We then introduce zeta-functions of similarity classes of planar IWR lattices, objects of independent interest, and study their convergence properties in Theorem~\ref{IWR_zeta}. Our arguments build on the parameterization of planar IWR lattices obtained in \cite{fletcher_jones}. In Section~\ref{IWR_subl} we continue using this parameterization to obtain an explicit description of IWR sublattices of a fixed planar IWR lattice which are similar to another fixed IWR lattice (Theorem~\ref{two_IWR_1}), and use it to determine convergence properties of the Dirichlet series generating function of all such sublattices (Lemma~\ref{zeta_two}). Finally, in Lemma~\ref{zeta_two_1} we decompose $\zeta_{\operatorname{WR}}(\Omega,s)$ for a fixed IWR planar lattice $\Omega$ into a sum over similarity classes of sublattices and observe that this sum can be represented as a product of the two different types of Dirichlet series that we investigated above; hence the result of Theorem~\ref{main} follows by Lemma~\ref{reduce}.
\bigskip
\section{Integral WR lattices in the plane}
\label{IWR}
Integral lattices are central objects in the arithmetic theory of quadratic forms and in lattice theory. IWR lattices have recently been studied in \cite{fletcher_jones}. The significance of IWR planar lattices for our purposes is reflected in the following reduction lemma.
\begin{lem} \label{reduce} Let $\Omega$ be an arithmetic planar lattice. Then there exists some IWR planar lattice $\Lambda$ such that $\zeta_{\operatorname{WR}}(\Omega,s)$ has the same abscissa of convergence with a pole of the same order as $\zeta_{\operatorname{WR}}(\Lambda,s)$.
\end{lem}
\proof Lemma 2.1 of \cite{kuehnlein} guarantees that $\Omega$ has a WR sublattice, call it $\Omega'$; naturally, $\Omega'$ must also be arithmetic. Let $A$ be a basis matrix for $\Omega'$; then the entries of $A^tA$ span a 1-dimensional vector space over ${\mathbb Q}$, meaning that there exists $\alpha \in {\mathbb R}_{>0}$ such that the matrix $\alpha A^tA$ is integral. Then the lattice $\Lambda := \sqrt{\alpha} A {\mathbb Z}^2$ is integral and is similar to $\Omega'$, hence it is also WR. Since $\Lambda$ is just a scalar multiple of $\Omega'$, it is clear that $\zeta_{\operatorname{WR}}(\Lambda,s)$ has the same abscissa of convergence with a pole of the same order as $\zeta_{\operatorname{WR}}(\Omega',s)$, which is the same as that of $\zeta_{\operatorname{WR}}(\Omega,s)$ by Lemma~3.2 of~\cite{kuehnlein}.
\endproof
Moreover, it is easy to see that these properties of the zeta-function of WR sublattices are preserved under similarity.
\begin{lem} \label{sim_red} Assume that $\Lambda_1, \Lambda_2$ are two planar lattices such that $\Lambda_1 \sim \Lambda_2$. Then $\zeta_{\operatorname{WR}}(\Lambda_1,s) = \zeta_{\operatorname{WR}}(\Lambda_2,s)$.
\end{lem}
\proof Similar lattices have the same numbers of WR sublattices of the same indices. The statement of the lemma follows immediately.
\endproof
Lemmas~\ref{reduce} and~\ref{sim_red} imply that we can focus our attention on similarity classes of IWR lattices to prove Theorem~\ref{main}. Integrality is not preserved under similarity; however, a WR similarity class may or may not contain integral lattices. WR similarity classes containing integral lattices, which we will call IWR similarity classes, have been studied in \cite{fletcher_jones} -- these are precisely the WR similarity classes containing arithmetic lattices. Let us write $\left< \Lambda \right>$ for the similarity class of the lattice $\Lambda$; then a result of \cite{fletcher_jones} states that the set of IWR similarity classes is
$$\operatorname{IWR} = \left\{ \left< \Gamma_D(p,q) \right> : \Gamma_D(p,q) = \frac{1}{\sqrt{q}} \begin{pmatrix} q & p \\ 0 & r\sqrt{D} \end{pmatrix} {\mathbb Z}^2 \right\},$$
where $(p,r,q,D)$ are all positive integer 4-tuples satisfying
\begin{equation}
\label{prqD}
p^2+Dr^2=q^2,\ \gcd(p,q)=1,\ \frac{p}{q} \leq \frac{1}{2}, \text{ and } D \text{ squarefree}.
\end{equation}
It is also discussed in \cite{fletcher_jones} that $\Gamma_D(p,q)$ is a {\it minimal} integral lattice with respect to norm in its similarity class. In particular, every integral lattice $\Lambda \in \left< \Gamma_D(p,q) \right>$ is of the form $\Lambda = \sqrt{k}\ U \Gamma_D(p,q)$ for some $k \in {\mathbb Z}_{>0}$, $U \in O_2({\mathbb R})$, and so
$$|\Lambda| \geq |\Gamma_D(p,q)| = q.$$
The set $\operatorname{IWR}$ can be represented as
$$\operatorname{IWR} = \bigsqcup_{D \in {\mathbb Z}_{>0} \text{ squarefree}} \operatorname{IWR}(D),$$
where for each fixed positive squarefree integer $D$, $\operatorname{IWR}(D) := \left\{ \left< \Gamma_D(p,q) \right> \right\}$ is the set of IWR similarity classes of {\it type} $D$.
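For a fixed squarefree $D$, the classes $\left< \Gamma_D(p,q) \right>$ with small minimum can be listed directly from the conditions in \eqref{prqD}; the following is a minimal brute-force sketch, with the bound on $q$ chosen arbitrarily.
\begin{verbatim}
# Enumerate (p, r, q) satisfying (prqD) for a fixed squarefree D, up to q <= Q.
from math import gcd, isqrt

def iwr_classes(D, Q):
    """List (p, r, q) with p^2 + D r^2 = q^2, gcd(p, q) = 1, p/q <= 1/2, q <= Q."""
    out = []
    for q in range(1, Q + 1):
        for p in range(1, q // 2 + 1):
            if gcd(p, q) != 1:
                continue
            rem = q * q - p * p
            if rem % D:
                continue
            r2 = rem // D
            r = isqrt(r2)
            if r > 0 and r * r == r2:
                out.append((p, r, q))
    return out

print(iwr_classes(5, 50))   # e.g. contains (2, 3, 7), since 2^2 + 5*3^2 = 7^2
\end{verbatim}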
Let us define the {\it minimum} and {\it determinant zeta-functions} of IWR similarity classes of type $D$ in the plane:
\begin{equation}
\label{z^m_IWR}
\zeta^m_{\operatorname{IWR}(D)}(s) = \sum_{\left< \Gamma_D(p,q) \right> \in \operatorname{IWR}(D)} \frac{1}{|\Gamma_D(p,q)|^s} = \sum_{\left< \Gamma_D(p,q) \right> \in \operatorname{IWR}(D)} \frac{1}{q^s},
\end{equation}
and
\begin{equation}
\label{z^d_IWR}
\zeta^d_{\operatorname{IWR}(D)}(s) = \sum_{\left< \Gamma_D(p,q) \right> \in \operatorname{IWR}(D)} \frac{1}{\operatorname{det} \Gamma_D(p,q)^s} = \frac{1}{D^{s/2}} \sum_{\left< \Gamma_D(p,q) \right> \in \operatorname{IWR}(D)} \frac{1}{r^s},
\end{equation}
where $s \in {\mathbb C}$. Since
\begin{equation}
\label{rq}
\frac{\sqrt{3}}{2} \times \frac{1}{\sqrt{D}} \times q \leq r \leq \frac{1}{\sqrt{D}} \times q,
\end{equation}
we have
\begin{equation}
\label{zeta_md_ineq}
\zeta^m_{\operatorname{IWR}(D)}(s) \leq \zeta^d_{\operatorname{IWR}(D)}(s) \leq \left( \frac{2}{\sqrt{3}} \right)^s \zeta^m_{\operatorname{IWR}(D)}(s)
\end{equation}
for all real $s$, and so $\zeta^m_{\operatorname{IWR}(D)}(s)$ and $\zeta^d_{\operatorname{IWR}(D)}(s)$ have the same convergence properties. We can establish the following result.
\begin{thm} \label{IWR_zeta} For every real value of $s > 1$,
\begin{equation}
\label{zeta_bnd}
\frac{1}{\left( 2\sqrt{3D} \right)^s} \frac{\zeta(2s-1)}{\zeta(2s)} \leq \zeta^d_{\operatorname{IWR}(D)}(s) \leq \left( \frac{2}{\sqrt{3}} \right)^s \zeta^m_{\operatorname{IWR}(D)}(s) \leq \left( \frac{4D}{\sqrt{3}} \right)^s \zeta_{{\mathbb Q}(\sqrt{-D})}(s),
\end{equation}
where $\zeta(s)$ is the Riemann zeta-function and $\zeta_{{\mathbb Q}(\sqrt{-D})}(s)$ is the Dedekind zeta-function of the imaginary quadratic number field ${\mathbb Q}(\sqrt{-D})$. Hence the Dirichlet series $\zeta^d_{\operatorname{IWR}(D)}(s)$ and $\zeta^m_{\operatorname{IWR}(D)}(s)$ are absolutely convergent for $\operatorname{Re}(s) > 1$, and for $s \in {\mathbb R}$ the limits
\begin{equation}
\label{lim_md}
\lim_{s \to 1^+} (s-1) \zeta^d_{\operatorname{IWR}(D)}(s),\ \lim_{s \to 1^+} (s-1) \zeta^m_{\operatorname{IWR}(D)}(s)
\end{equation}
exist and are nonzero. Moreover, the $N$-th partial sums of coefficients of these Dirichlet series are equal to $O(N)$ as $N \to \infty$.
\end{thm}
\proof Let $D$ be a fixed positive squarefree integer. Lemma~1.3 of \cite{fletcher_jones} guarantees that $p,r,q \in {\mathbb Z}_{>0}$ satisfy \eqref{prqD} if and only if
\begin{equation}
\label{mn_par}
p = \frac{| m^2-Dn^2 |}{2^e \gcd(m,D)},\ r = \frac{2mn}{2^e \gcd(m,D)},\ q = \frac{m^2 + Dn^2}{2^e \gcd(m,D)},
\end{equation}
for some $m,n \in {\mathbb Z}$ with $\gcd(m,n)=1$ and $\sqrt{\frac{D}{3}} \leq \frac{m}{n} \leq \sqrt{3D}$, where
\begin{equation}
\label{e_def}
e = \left\{ \begin{array}{ll}
0 & \mbox{if either $2 \mid D$, or $2 \mid (D+1), mn$} \\
1 & \mbox{otherwise.}
\end{array}
\right.
\end{equation}
Then
\begin{equation}
\label{z_IWR_mn}
\zeta^m_{\operatorname{IWR}(D)}(s) = \sum_{\substack{m,n \in {\mathbb Z}_{>0},\ \gcd(m,n)=1 \\ \sqrt{\frac{D}{3}} \leq \frac{m}{n} \leq \sqrt{3D}}} \left( \frac{2^e \gcd(m,D)}{m^2 + Dn^2} \right)^s,
\end{equation}
and so for each real $s > 1$,
\begin{eqnarray}
\label{z_IWR_mn_up}
\zeta^m_{\operatorname{IWR}(D)}(s) & \leq & (2D)^s \sum_{\substack{m,n \in {\mathbb Z} \setminus \{0\} \\ \sqrt{\frac{D}{3}} \leq \frac{m}{n} \leq \sqrt{3D}}} \frac{1}{ \left( m^2 + Dn^2 \right)^s} \nonumber \\
& \leq & \left( 2D \right)^s \sum_{m,n \in {\mathbb Z} \setminus \{0\}} \frac{1}{ \left( m^2 + Dn^2 \right)^s} = \left( 2D \right)^s \zeta_{{\mathbb Q}(\sqrt{-D})}(s).
\end{eqnarray}
Now, the Dedekind zeta-function of a number field converges absolutely for $\operatorname{Re}(s) > 1$ and has a simple pole at $s=1$.
On the other hand, for all real $s >1$,
{{{\boldsymbol 1}_Ldsymbol 0}ldsymbol e}gin{eqnarray}
\label{z_IWR_mn_low}
\zeta^d_{{\mathcal I}WR(D)}(s) & \geq & {\mathcal S}um_{{\mathcal S}ubstack{m,n \in {\mathbb Z} {\mathcal S}etminus \{0\},\ \gcd(m,n)=1 \\ {\mathcal S}qrt{\frac{D}{3}} \leq \frac{m}{n} \leq {\mathcal S}qrt{3D}}} \frac{1}{\left( 2mn \right)^s} \nonumber \\
& \geq & \frac{1}{\left( 2{\mathcal S}qrt{3D} \right)^s} {\mathcal S}um_{n=1}^{\infty} \frac{a_n}{n^{2s}},
\end{eqnarray}
where $a_n$ is the cardinality of the set
$$S_n = \left\{ m \in {\mathbb Z}_{>0} : n{\mathcal S}qrt{\frac{D}{3}} \leq m \leq n {\mathcal S}qrt{3D},\ \gcd(m,n)=1 \right\}.$$
We will now produce a lower bound on $a_n$ for every $n \geq 1$. For each $m \in S_n$, let $s_n(m) = m \operatorname{mod} n$, then
$$a_n = |S_n| \geq \left| \left\{ s_n(m) : m \in S_n \right\} \right|.$$
Notice that
$$\sqrt{3D} - \sqrt{D/3} = \sqrt{D} (\sqrt{3} - 1/\sqrt{3}) > 1$$
for each $D$, and hence
$$\left\{ s_n(m) : m \in S_n \right\} = \left\{ k \in {\mathbb Z} : 1 \leq k < n, \gcd(k,n) =1 \right\},$$
meaning that $a_n \geq \varphi(n)$, the Euler $\varphi$-function of $n$. Therefore
\begin{equation}
\label{zeta_phi}
\zeta^d_{\operatorname{IWR}(D)}(s) \geq \frac{1}{\left( 2\sqrt{3D} \right)^s} \sum_{n=1}^{\infty} \frac{\varphi(n)}{n^{2s}} = \frac{1}{\left( 2\sqrt{3D} \right)^s} \frac{\zeta(2s-1)}{\zeta(2s)}
\end{equation}
for all real $s > 1$ by Theorem 288 of \cite{hardy}. The right hand side of \eqref{zeta_phi} converges absolutely for $\Re(s) > 1$ and has a simple pole at $s=1$. The inequality \eqref{zeta_bnd} now follows upon combining \eqref{zeta_md_ineq} with \eqref{z_IWR_mn_up} and~\eqref{zeta_phi}.
Since each Dirichlet series can be written in the form $\sum_{n=1}^{\infty} b_n n^{-s}$ for some coefficient sequence $\{b_n\}_{n=1}^{\infty}$, we will refer to $\sum_{n=1}^N b_n$ as its $N$-th partial sum of coefficients. Now Theorem~4.20 of \cite{sautoy} guarantees that the $N$-th partial sums of coefficients of the Dirichlet series $\frac{1}{\left( 2\sqrt{3D} \right)^s} \frac{\zeta(2s-1)}{\zeta(2s)}$ and $\left( \frac{4D}{\sqrt{3}} \right)^s \zeta_{{\mathbb Q}(\sqrt{-D})}(s)$ are equal to $O(N)$ as $N \to \infty$. Inequality \eqref{zeta_bnd} implies that the same must be true about the $N$-th partial sums of coefficients of the Dirichlet series $\zeta^m_{\operatorname{IWR}(D)}(s)$ and $\zeta^d_{\operatorname{IWR}(D)}(s)$, and that $\zeta^m_{\operatorname{IWR}(D)}(s)$ and $\zeta^d_{\operatorname{IWR}(D)}(s)$ are absolutely convergent for $\Re(s) > 1$ with the limits in \eqref{lim_md} existing and nonzero for $s \in {\mathbb R}$. This finishes the proof of the theorem.
\endproof
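The parametrization \eqref{mn_par} is easy to test numerically. The following Python sketch (purely illustrative and not part of the proof; the choice $D=5$ and the truncation bound are arbitrary) enumerates coprime pairs $(m,n)$ in the stated range, forms the corresponding triples $(p,r,q)$, and checks that $p^2+Dr^2=q^2$, while also reporting whether every generated triple is primitive.
\begin{verbatim}
from math import gcd, isqrt

def triples_from_mn(D, n_max=25):
    """Triples (p, r, q) from the parametrization (mn_par), for coprime (m, n)
    with sqrt(D/3) <= m/n <= sqrt(3D).  D is assumed positive and squarefree."""
    triples = []
    for n in range(1, n_max + 1):
        for m in range(1, isqrt(3 * D * n * n) + 1):
            if gcd(m, n) != 1:
                continue
            if not (D * n * n <= 3 * m * m and m * m <= 3 * D * n * n):
                continue  # enforce sqrt(D/3) <= m/n <= sqrt(3D)
            # e = 0 if 2 | D or 2 | mn, and e = 1 otherwise (D and mn both odd)
            e = 0 if (D % 2 == 0 or (m * n) % 2 == 0) else 1
            d = 2 ** e * gcd(m, D)
            p = abs(m * m - D * n * n) // d
            r = (2 * m * n) // d
            q = (m * m + D * n * n) // d
            triples.append((p, r, q))
    return triples

if __name__ == "__main__":
    D = 5
    ts = triples_from_mn(D)
    assert all(p * p + D * r * r == q * q for p, r, q in ts)
    print(len(ts), "triples; all primitive:",
          all(gcd(gcd(p, r), q) == 1 for p, r, q in ts))
\end{verbatim}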
\begin{rem} \label{ht_zeta} There is a connection between the zeta-function $\zeta^m_{\operatorname{IWR}(D)}(s)$ and the height zeta-function of the corresponding Pell-type rational conic. One can define a height function on points ${\boldsymbol x} = (x_1, x_2,x_3) \in {\mathbb Z}^3$ as
$$H({\boldsymbol x}) = \frac{1}{\gcd(x_1,x_2,x_3)} \max_{1 \leq i \leq 3} |x_i|.$$
It is easy to see that $H$ is in fact projectively defined, and hence induces a function on a rational projective space. Let $D$ be a fixed positive squarefree integer, then the set of all integral points $(p,r,q)$ satisfying
\begin{equation}
\label{int_con}
p^2+Dr^2=q^2,\ \gcd(p,r,q)=1,\ q > 0
\end{equation}
is precisely the set of all distinct representatives of projective rational points on the Pell-type conic
$$X_D({\mathbb Q}) = \{ [x,y,z] \in {\mathbb P}({\mathbb Q}^3) : x^2+Dy^2=z^2 \}.$$
For each point $[x,y,z] \in X_D({\mathbb Q})$ there is a unique $(p,r,q)$ satisfying \eqref{int_con}, and
$$H([x,y,z]) = H(p,r,q)=q.$$
Hence the height zeta-function of $X_D({\mathbb Q})$ is
$$\sum_{[x,y,z] \in X_D({\mathbb Q})} \frac{1}{H([x,y,z])^s} = \sum_{(p,r,q) \text{ as in \eqref{int_con}}} \frac{1}{q^s},$$
where $s \in {\mathbb C}$.
\end{rem}
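For real $s>1$ the height zeta-function above can be approximated by brute force. The sketch below is illustrative only; the truncation $q\le 500$ and the choice $D=5$ are arbitrary. It enumerates the primitive solutions in \eqref{int_con} directly and sums $q^{-s}$.
\begin{verbatim}
from math import gcd, isqrt

def height_zeta_partial(D, s, q_max=500):
    """Partial sum of the height zeta-function of X_D(Q): sum of q^(-s) over
    integer triples (p, r, q) with p^2 + D r^2 = q^2, gcd(p, r, q) = 1 and
    0 < q <= q_max (all sign choices of p and r are counted)."""
    total = 0.0
    for q in range(1, q_max + 1):
        for r in range(0, isqrt(q * q // D) + 1):
            rest = q * q - D * r * r
            p = isqrt(rest)
            if p * p != rest:
                continue
            signs = [(sp, sr) for sp in ({p, -p} if p else {0})
                              for sr in ({r, -r} if r else {0})]
            for sp, sr in signs:
                if gcd(gcd(abs(sp), abs(sr)), q) == 1:
                    total += q ** (-s)
    return total

if __name__ == "__main__":
    print(height_zeta_partial(5, s=2.0))
\end{verbatim}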
\bigskip
\section{IWR sublattices of IWR lattices}
\label{IWR_subl}
In this section we further investigate distribution properties of planar IWR lattices and prove Theorem~\ref{main}. Theorem~1.3 of \cite{fletcher_jones} guarantees that every IWR lattice of type $D$ contains IWR sublattices belonging to every similarity class of this type, and none others. Hence $\zeta^m_{\operatorname{IWR}(D)}(s)$ and $\zeta^d_{\operatorname{IWR}(D)}(s)$ are zeta-functions of minimal lattices over similarity classes of IWR sublattices of any IWR lattice of type $D$ in the plane. It will be convenient to define
$$\Omega_D(p,q) = \sqrt{q}\ \Gamma_D(p,q) = \begin{pmatrix} q & p \\ 0 & r\sqrt{D} \end{pmatrix} {\mathbb Z}^2$$
for each $(p,r,q,D)$ satisfying \eqref{prqD}. Then for a fixed choice of $D,p_0,q_0$ the lattice $\Omega_D(p_0,q_0)$ contains IWR sublattices similar to each $\Omega_D(p,q)$. We will now describe explicitly what these sublattices look like. We start with a simple example of such lattices.
\begin{lem} \label{two_IWR} Let $(p,r,q,D)$ and $(p_0,r_0,q_0,D)$ satisfy \eqref{prqD}. Let
\begin{equation}
\label{kmn}
k=m^2+Dn^2
\end{equation}
for some $m,n \in {\mathbb Z}$, not both zero, and let
\begin{equation}
\label{U}
U = \begin{pmatrix} \frac{m}{\sqrt{k}} & -\frac{n\sqrt{D}}{\sqrt{k}} \\ \frac{n\sqrt{D}}{\sqrt{k}} & \frac{m}{\sqrt{k}} \end{pmatrix}.
\end{equation}
Then $U$ is a real orthogonal matrix such that the lattice
$$\Lambda = \sqrt{k}\ r_0q_0 U \Omega_D(p,q)$$
is an IWR sublattice of $\Omega_D(p_0,q_0)$ similar to $\Omega_D(p,q)$ with
$$\left| \Omega_D(p_0,q_0) : \Lambda \right| = r_0q_0rqk.$$
\end{lem}
\proof As indicated in the proof of Theorem~1.3 of \cite{fletcher_jones},
$$\begin{pmatrix} q_0 & p_0 \\ 0 & r_0\sqrt{D} \end{pmatrix} \begin{pmatrix} r_0q & r_0p-rp_0 \\ 0 & rq_0 \end{pmatrix} = r_0q_0 \begin{pmatrix} q & p \\ 0 & r\sqrt{D} \end{pmatrix},$$
and so $r_0q_0 \Omega_D(p,q)$ is a sublattice of $\Omega_D(p_0,q_0)$ of index $r_0q_0rq$. Now notice that
\begin{eqnarray*}
\Lambda & = & \sqrt{k}\ r_0q_0 \begin{pmatrix} \frac{m}{\sqrt{k}} & -\frac{n\sqrt{D}}{\sqrt{k}} \\ \frac{n\sqrt{D}}{\sqrt{k}} & \frac{m}{\sqrt{k}} \end{pmatrix} \begin{pmatrix} q & p \\ 0 & r\sqrt{D} \end{pmatrix} {\mathbb Z}^2 \\
& = & \begin{pmatrix} q_0 & p_0 \\ 0 & r_0\sqrt{D} \end{pmatrix} \begin{pmatrix} mr_0q - np_0q & m(r_0p-p_0r)-n(p_0p+Dr_0r) \\ nq_0q & mrq_0+npq_0 \end{pmatrix} {\mathbb Z}^2,
\end{eqnarray*}
and hence is a sublattice of $\Omega_D(p_0,q_0)$ of index $r_0q_0rqk$.
\endproof
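Lemma~\ref{two_IWR} can also be checked numerically on concrete data. In the sketch below (illustrative only) the triples $(2,3,7)$ and $(1,4,9)$ for $D=5$ are sample solutions of $p^2+Dr^2=q^2$ chosen for this example, and $(m,n)=(1,1)$; the code verifies that the change-of-basis matrix is integral and that the index equals $r_0q_0rqk$.
\begin{verbatim}
import numpy as np

def check_two_IWR(D, p0, r0, q0, p, r, q, m, n):
    """Check Lemma (two_IWR): Lambda = sqrt(k) r0 q0 U Omega_D(p,q) should be a
    sublattice of Omega_D(p0,q0) of index r0*q0*r*q*k."""
    k = m * m + D * n * n
    sD, sk = np.sqrt(D), np.sqrt(k)
    B0 = np.array([[q0, p0], [0.0, r0 * sD]])      # basis of Omega_D(p0, q0)
    B = np.array([[q, p], [0.0, r * sD]])           # basis of Omega_D(p, q)
    U = np.array([[m, -n * sD], [n * sD, m]]) / sk   # the rotation from (U)
    L = sk * r0 * q0 * (U @ B)                       # basis of Lambda
    C = np.linalg.solve(B0, L)                       # change-of-basis matrix
    integral = np.allclose(C, np.round(C), atol=1e-6)
    return integral, abs(np.linalg.det(C)), r0 * q0 * r * q * k

if __name__ == "__main__":
    # D = 5 with 2^2 + 5*3^2 = 7^2 and 1^2 + 5*4^2 = 9^2
    print(check_two_IWR(5, 2, 3, 7, 1, 4, 9, 1, 1))
\end{verbatim}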
Lemma~\ref{two_IWR} demonstrates some examples of sublattices of $\Omega_D(p_0,q_0)$ similar to $\Omega_D(p,q)$. We will now describe all such sublattices.
\begin{thm} \label{two_IWR_1} A sublattice $\Lambda$ of $\Omega_D(p_0,q_0)$ is similar to $\Omega_D(p,q)$ as above if and only if
\begin{equation}
\label{subl}
\Lambda = \sqrt{Q_{p_0,q_0,p,q}(m,n)}\ U \Gamma_D(p,q),
\end{equation}
for some $m,n \in {\mathbb Z}$, not both zero, where $Q_{p_0,q_0,p,q}(m,n)$ is a positive definite binary quadratic form, given by \eqref{Q} below, and $U$ is a real orthogonal matrix as in \eqref{U_orth} with the angle $t$ satisfying \eqref{sin_cos}, where $x,y$ are as in \eqref{cong_sol_1} or \eqref{cong_sol_2}. In this case,
\begin{equation}
\label{index}
\left| \Omega_D(p_0,q_0) : \Lambda \right| = \frac{rQ_{p_0,q_0,p,q}(m,n)}{r_0q_0}.
\end{equation}
\end{equation}
\end{thm}
\proof By Theorem~1.1 of \cite{fletcher_jones}, $\Lambda \sim \Omega_D(p,q)$ if and only if
\begin{equation}
\label{L1}
\Lambda = \sqrt{\frac{k}{q}}\ U \begin{pmatrix} q & p \\ 0 & r\sqrt{D} \end{pmatrix} {\mathbb Z}^2
\end{equation}
for some positive integer $k$ and a real orthogonal matrix
\begin{equation}
\label{U_orth}
U = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \text{ or } \begin{pmatrix} \cos t & \sin t \\ \sin t & -\cos t \end{pmatrix}
\end{equation}
for some value of the angle $t$. On the other hand, $\Lambda \subset \Omega_D(p_0,q_0)$ if and only if
\begin{equation}
\label{L2}
\Lambda = \begin{pmatrix} q_0 & p_0 \\ 0 & r_0\sqrt{D} \end{pmatrix} C {\mathbb Z}^2,
\end{equation}
where $C$ is an integer matrix. Therefore $\Lambda$ as in \eqref{L1} is a sublattice of $\Omega_D(p_0,q_0)$ if and only if it is of the form \eqref{L2} with
$$C = \alpha\ \begin{pmatrix} q(r_0\sqrt{D}\cos t - p_0\sin t) & (r_0p-rp_0)\sqrt{D}\cos t - (pp_0+rr_0D)\sin t \\ qq_0\sin t & q_0p\sin t + q_0r\sqrt{D}\cos t \end{pmatrix}$$
or
$$C = \alpha\ \begin{pmatrix} q(r_0\sqrt{D}\cos t - p_0\sin t) & (r_0p+rp_0)\sqrt{D}\cos t - (pp_0-rr_0D)\sin t \\ qq_0\sin t & q_0p\sin t - q_0r\sqrt{D}\cos t \end{pmatrix}$$
where $\alpha = \frac{\sqrt{k}}{q_0r_0 \sqrt{qD}}$. These conditions imply that we must have
\begin{equation}
\label{sin_cos}
\cos t = \frac{xp_0+yq_0}{\sqrt{qk}},\ \sin t = \frac{xr_0\sqrt{D}}{\sqrt{qk}}
\end{equation}
for some integers $x,y$ satisfying one of the following two systems of congruences:
\begin{equation}
\label{cong_1}
\left. \begin{array}{ll}
q_0rx + (p_0r - r_0p)y \equiv 0 (\operatorname{mod} qr_0) \\
(p_0r + r_0p)x + q_0ry \equiv 0 (\operatorname{mod} qr_0)
\end{array}
\right\},
\end{equation}
or
\begin{equation}
\label{cong_2}
\left. \begin{array}{ll}
q_0rx + (p_0r + r_0p)y \equiv 0 (\operatorname{mod} qr_0) \\
(p_0r - r_0p)x + q_0ry \equiv 0 (\operatorname{mod} qr_0)
\end{array}
\right\}.
\end{equation}
First assume \eqref{cong_1} is satisfied. Notice that
$$\operatorname{det} \begin{pmatrix} q_0r & p_0r-r_0p \\ p_0r+r_0p & q_0r \end{pmatrix} = (qr_0)^2 \equiv 0 (\operatorname{mod} qr_0),$$
which means that a pair $(x,y)$ solves the system \eqref{cong_1} if and only if it solves one of these two congruences. Hence it is enough to solve the first congruence of \eqref{cong_1}. Define $d_1 = \gcd(q_0r,qr_0)$ and $d_2 = \gcd(d_1,p_0r - r_0p)$, and let $a,b \in {\mathbb Z}$ be such that
$$aq_0r+bqr_0=d_1.$$
It now easily follows that the set of all possible solutions to \eqref{cong_1} is
\begin{equation}
\label{cong_sol_1}
(x,y) = \left\{ \left( \frac{a(r_0p-p_0r)n}{d_2} + \frac{qr_0m}{d_1},\ \frac{d_1n}{d_2} \right) : n,m \in {\mathbb Z} \right\}.
\end{equation}
Combining \eqref{sin_cos} with \eqref{cong_sol_1}, we see that
\begin{equation}
\label{qk1}
qk = \left( \frac{\left( ap_0(r_0p-p_0r) + d_1q_0 \right)n}{d_2} + \frac{qp_0r_0m}{d_1} \right)^2 + Dr_0^2 \left( \frac{a(r_0p-p_0r)n}{d_2} + \frac{qr_0m}{d_1} \right)^2.
\end{equation}
Then the right hand side of \eqref{qk1} is a positive definite integral binary quadratic form in the variables $m,n$:
\begin{eqnarray}
\label{Q1}
Q^1_{p_0,q_0,p,q}(m,n) & = & \left\{ \frac{q_0^2 q^2 r_0^2}{d_1^2} \right\} \times m^2 \nonumber \\
& + & \left\{ \frac{ a^2 (r_0p-p_0r)^2 q_0^2 + 2ad_1p_0q_0(r_0p-p_0r) + d_1^2q_0^2}{d_2^2} \right\} \times n^2 \nonumber \\
& + & \left\{ \frac{2a(r_0p-p_0r)qq_0^2r_0 + 2d_1qq_0r_0p_0}{d_1d_2} \right\} \times mn.
\end{eqnarray}
One can observe that all three coefficients of $Q^1_{p_0,q_0,p,q}(m,n)$ are divisible by $q$. Then define
\begin{equation}
\label{Q}
Q_{p_0,q_0,p,q}(m,n) = \frac{1}{q} Q^1_{p_0,q_0,p,q}(m,n),
\end{equation}
which is again a positive definite integral binary quadratic form.
Now notice that the system of congruences in \eqref{cong_2} is the same as the one in \eqref{cong_1} with the order of equations reversed and the variables $x$ and $y$ reversed. Hence the solution set for \eqref{cong_2} is
\begin{equation}
\label{cong_sol_2}
(x,y) = \left\{ \left( \frac{d_1n}{d_2},\ \frac{a(r_0p-p_0r)n}{d_2} + \frac{qr_0m}{d_1} \right) : n,m \in {\mathbb Z} \right\}.
\end{equation}
Combining \eqref{sin_cos} with \eqref{cong_sol_2}, we see that if \eqref{cong_2} is satisfied, then
\begin{equation}
\label{qk2}
qk = \left( \frac{\left( aq_0(r_0p-p_0r)+d_1p_0 \right)n}{d_2} + \frac{qq_0r_0m}{d_1} \right)^2 + \frac{Dr_0^2d_1^2n^2}{d_2^2}.
\end{equation}
Then the right hand side of \eqref{qk2} is precisely $Q^1_{p_0,q_0,p,q}(n,m)$.
In either case, we have
\begin{equation}
\label{k_value}
k = \frac{1}{q} Q^1_{p_0,q_0,p,q}(m,n) = Q_{p_0,q_0,p,q}(m,n)
\end{equation}
for some $m,n \in {\mathbb Z}$, not both zero. Then \eqref{subl} follows upon combining \eqref{L1} with \eqref{k_value}. Now we notice that
$$\left| {\mathcal O}mega_D(p_0,q_0) : {\mathcal L}ambda \right| = \frac{\operatorname{det} {\mathcal L}ambda}{\operatorname{det} {\mathcal O}mega_D(p_0,q_0)},$$
and so \eqref{index} follows from \eqref{subl}. This completes the proof of the theorem.
\endproof
Now define $S_D(p_0,q_0)$ to be the set of all IWR sublattices of $\Omega_D(p_0,q_0)$, and $S_D(p_0,q_0,p,q)$ to be the set of all IWR sublattices of $\Omega_D(p_0,q_0)$ which are similar to $\Omega_D(p,q)$. Then
$$S_D(p_0,q_0) = \bigsqcup S_D(p_0,q_0,p,q).$$
Define
\begin{equation}
\label{Z_D_1}
Z_{D,p_0,q_0,p,q}(s) = \sum_{\Lambda \in S_D(p_0,q_0,p,q)} \frac{1}{\left| \Omega_D(p_0,q_0) : \Lambda \right|^s}
\end{equation}
and
\begin{equation}
\label{Z_D_2}
Z_{D,p_0,q_0}(s) = \sum_{\Lambda \in S_D(p_0,q_0)} \frac{1}{\left| \Omega_D(p_0,q_0) : \Lambda \right|^s} = \sum_{(p,q) \text{ as in \eqref{prqD}}} Z_{D,p_0,q_0,p,q}(s)
\end{equation}
for $s \in {\mathbb C}$.
\begin{lem} \label{zeta_two} For every squarefree positive integer $D$ and integer triples $(p_0,r_0,q_0)$ and $(p,r,q)$ satisfying \eqref{prqD}, the Dirichlet series $Z_{D,p_0,q_0,p,q}(s)$ is absolutely convergent for $\Re(s) > 1$. Moreover, it has analytic continuation to all of ${\mathbb C}$ except for a simple pole at $s=1$.
\end{lem}
\proof By Theorem~\ref{two_IWR_1},
\begin{equation}
\label{eps1}
Z_{D,p_0,q_0,p,q}(s) = \left( \frac{r_0q_0}{r} \right)^s \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s},
\end{equation}
where the sum on the right hand side of \eqref{eps1} is the Epstein zeta-function of the positive definite integral binary quadratic form $Q_{p_0,q_0,p,q}(m,n)$; it is known to converge absolutely for $\Re(s) > 1$ and has analytic continuation to all of ${\mathbb C}$ except for a simple pole at $s=1$ (this is a classical result, which can be found for instance in Chapter~5, \S 5 of \cite{koecher}; in fact, the authors of \cite{koecher} indicate that the existence of a simple pole at $s=1$ goes as far back as the work of Kronecker, 1889). The lemma follows.
\endproof
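For real $s>1$ an Epstein zeta-function such as the one in \eqref{eps1} can be approximated by direct summation. The sketch below is illustrative only: the test form $m^2+n^2$ and the truncation are arbitrary choices, and the closed form quoted in the comment is the classical factorization $4\zeta(s)L(s,\chi_4)$ for that particular form, not a result of this paper.
\begin{verbatim}
import math

def epstein_partial(a, b, c, s, M=400):
    """Partial sum of the Epstein zeta-function of the positive definite form
    Q(m, n) = a m^2 + b mn + c n^2 over 0 < max(|m|, |n|) <= M, for real s > 1."""
    total = 0.0
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            if m == 0 and n == 0:
                continue
            total += (a * m * m + b * m * n + c * n * n) ** (-s)
    return total

if __name__ == "__main__":
    # For Q(m, n) = m^2 + n^2 and s = 2 the full series equals 4 * zeta(2) * G,
    # where G = 0.9159655941... is Catalan's constant.
    print(epstein_partial(1, 0, 1, 2.0),
          4 * (math.pi ** 2 / 6) * 0.9159655941772190)
\end{verbatim}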
\begin{lem} \label{zeta_two_1} For every squarefree positive integer $D$ and integer triple $(p_0,r_0,q_0)$ satisfying \eqref{prqD}, the Dirichlet series $Z_{D,p_0,q_0}(s)$ is absolutely convergent for $\Re(s) > 1$ and for $s \in {\mathbb R}$ the limit
\begin{equation}
\label{lim_ZD}
\lim_{s \to 1^+} (s-1)^2 Z_{D,p_0,q_0}(s)
\end{equation}
exists and is nonzero. Moreover, if we write $Z_{D,p_0,q_0}(s) = \sum_{n=1}^{\infty} b_n n^{-s}$, then the $N$-th partial sum of coefficients of $Z_{D,p_0,q_0}(s)$ is
$$\sum_{n=1}^N b_n = O(N \log N)$$
as $N \to \infty$.
\end{lem}
\proof Combining \eqref{Z_D_2}, \eqref{eps1}, and \eqref{rq} we obtain for every real $s>0$
$$Z_{D,p_0,q_0}(s) = (r_0q_0)^s \sum_{(p,r,q) \text{ as in \eqref{prqD}}} \left( \frac{1}{r^s} \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s} \right).$$
Combining this observation with Theorem~\ref{IWR_zeta} implies that
\begin{eqnarray}
\label{eps2}
&\ & \left( \frac{r_0q_0}{2\sqrt{3}} \right)^s \frac{\zeta(2s-1)}{\zeta(2s)} \inf_{(p,r,q) \text{ as in \eqref{prqD}}} \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s} \\
& \leq & \left( r_0q_0 \sqrt{D} \right)^s \zeta^d_{\operatorname{IWR}(D)}(s) \inf_{(p,r,q) \text{ as in \eqref{prqD}}} \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s} \nonumber \\
& \leq & Z_{D,p_0,q_0}(s) \nonumber \\
&\leq & \left( r_0q_0 \sqrt{D} \right)^s \zeta^d_{\operatorname{IWR}(D)}(s) \sup_{(p,r,q) \text{ as in \eqref{prqD}}} \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s} \nonumber \\
&\leq & \left( \frac{4r_0q_0D^{\frac{3}{2}}}{\sqrt{3}} \right)^s \zeta_{{\mathbb Q}(\sqrt{-D})}(s) \sup_{(p,r,q) \text{ as in \eqref{prqD}}} \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s}. \nonumber
\end{eqnarray}
Theorem~\ref{IWR_zeta} and Lemma~\ref{zeta_two} now imply that for each $(p,r,q)$ as in \eqref{prqD} the Dirichlet series
\begin{equation}
\label{Dir1}
\left( \frac{r_0q_0}{2\sqrt{3}} \right)^s \frac{\zeta(2s-1)}{\zeta(2s)} \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s}
\end{equation}
and
\begin{equation}
\label{Dir2}
\left( \frac{4r_0q_0D^{\frac{3}{2}}}{\sqrt{3}} \right)^s \zeta_{{\mathbb Q}(\sqrt{-D})}(s) \sum_{(m,n) \in {\mathbb Z}^2 \setminus \{ {\boldsymbol 0} \}} \frac{1}{Q_{p_0,q_0,p,q}(m,n)^s}
\end{equation}
are absolutely convergent for $\Re(s) > 1$ and have analytic continuation to the half-plane $\Re(s) > 0$ except for a pole of order 2 at $s=1$. Then Theorem~4.20 of \cite{sautoy} implies that the $N$-th partial sums of coefficients of all the Dirichlet series as in \eqref{Dir1} and \eqref{Dir2} must be equal to $O(N \log N)$. Then \eqref{eps2} implies that the $N$-th partial sum of coefficients of $Z_{D,p_0,q_0}(s)$ is also $O(N \log N)$, and $Z_{D,p_0,q_0}(s)$ is absolutely convergent for $\Re(s) > 1$, where the limit in~\eqref{lim_ZD} exists and is nonzero for $s \in {\mathbb R}$.
\endproof
\proof[Proof of Theorem~\ref{main}] The theorem now follows upon combining Lemmas~\ref{reduce},~\ref{sim_red}, and~\ref{zeta_two_1}.
\endproof
\bigskip
{\bf Acknowledgment.} I would like to thank Stefan K\"uhnlein for providing me with a copy of his manuscript \cite{kuehnlein}, as well as for discussing the problem with me, and for his useful remarks about this paper. I would also like to thank Michael D. O'Neill and Bogdan Petrenko for their helpful comments on this paper. Finally, I would like to thank the referee for the suggestions and corrections which improved the quality of this paper.
\bigskip
\bibliographystyle{plain}
\bibliography{iwr_zeta}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Common fixed
points and endpoints of multi-valued generalized weak contraction
mappings}
\author{Congdian Cheng}
\address{College of Mathematics and Systems
Science, Shenyang Normal University,
Shenyang, 110034, China}
\begin{abstract}
Let $(X, d)$ be a complete metric space, and let $S, T :
X\rightarrow
CB(X)$ be a duality of multi-valued generalized
weak contraction mappings or a duality of generalized $\varphi$-weak
contraction mappings. We discuss the common fixed points and
endpoints of the two kinds of multi-valued weak mappings. Our
results extend and improve some results given by Daffer and Kaneko
(1995), Rouhani and Moradi (2010), and Moradi and Khojasteh (2011).
\end{abstract}
\begin{keyword} multi-valued mapping\sep weak contraction\sep
common fixed point\sep common endpoint\sep Hausdorff metric
\MSC 47H10 \sep 54C60
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{1}Let $(X, d)$ be a metric space and $CB(X)$ denote the class
of closed and bounded subsets of $X$. Also let $S, T : X\rightarrow
2^X$ be multi-valued mappings. A point $x$ is called a fixed point of
$T$ if $x\in Tx$. Define $Fix(T) = \{x\in X : x \in Tx\}$. An
element $x \in X$ is said to be an endpoint (or stationary point) of
a multi-valued mapping $T$ if $Tx = \{x\}$. We denote the set of all
endpoints of $T$ by $End(T)$.
A bivariate mapping $\phi: X\times X\rightarrow[0, +\infty)$ is
called compactly positive if $\inf\{\phi(x,y): a\leq d(x,y)\leq
b\}>0$ for each finite interval $[a, b]\subseteq (0, +\infty)$. A
mapping $T : X \rightarrow CB(X)$ is called weakly contractive if
there exists a compactly positive mapping $\phi$
such that
$$H(Tx, Ty)
\leq d(x, y)-\phi(x, y)$$ for each $x, y \in X$, where
$$H(A, B) :=\max\{ \sup\limits_{x\in B}
d(x, A), \sup\limits_{x\in A} d(x, B)\},$$ denoting the Hausdorff
metric on $CB(X)$ (see [1]).
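For finite subsets the Hausdorff metric can be computed directly from this definition. The following Python sketch (illustrative only) does so for an arbitrary metric $d$ supplied as a function.
\begin{verbatim}
def hausdorff(A, B, d):
    """H(A, B) = max( sup_{x in B} d(x, A), sup_{x in A} d(x, B) )
    for finite nonempty sets A, B, where d(x, S) = min_{y in S} d(x, y)."""
    def dist_to_set(x, S):
        return min(d(x, y) for y in S)
    return max(max(dist_to_set(x, A) for x in B),
               max(dist_to_set(x, B) for x in A))

if __name__ == "__main__":
    d = lambda x, y: abs(x - y)          # the usual metric on the real line
    print(hausdorff({0, 1}, {0, 3}, d))  # 2: the point 3 lies at distance 2 from {0, 1}
\end{verbatim}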
A mapping $T : X \rightarrow CB(X)$ is called a generalized
$\varphi$-weak contraction if there exists a map $\varphi: [0,
+\infty)\rightarrow[0, +\infty)$ with $\varphi(0)=0$ and
$\varphi(t)<t$ for all $t>0$ such that
$$H(Tx, Ty)
\leq \varphi(N(x, y))$$
for all $x, y \in X$, where
$$N(x, y)
:= \max \{d(x, y), d(x, Tx), d(y, Ty), \frac{d(x, Ty) + d(y,
Tx)}{2}\}. $$
Two mappings $S, T : X \rightarrow CB(X)$ ($S, T : X \rightarrow X$)
are called a duality of generalized weak contractions if there
exists a bivariate mapping $\alpha: X\times X\rightarrow[0, 1)$ such
that
$$H(Sx, Ty)
\leq \alpha(x, y)M(x, y)$$ for all $x, y \in X$ (or equivalently, if
there exists a bivariate mapping $\phi: [0, +\infty)\rightarrow[0,
+\infty)$ with $\phi(0)=0$ and $\phi(t)>0$ for all $t>0$
such that
$$H(Sx, Ty)
\leq M(x, y)-\phi(x, y)$$ for each $x, y \in X$), where
$$\begin{array}{rcl}M(x, y): =
\max\{d(x,y), d(x, Sx), d(y, Ty), \frac{d(x, Ty)+d(y,
Sx)}{2}\}.\end{array}$$
Also, two mappings $S, T : X \rightarrow CB(X)$ are called a duality
of generalized $\varphi$-weak contractions if there exists a
mapping $\varphi: [0, +\infty)\rightarrow[0, +\infty)$
with $\varphi(0)=0$ and $\varphi(t)<t$ for all $t>0$ such that
$$H(Sx, Ty)
\leq \varphi(M(x, y))$$
for all $x, y \in X$ (or
equivalently, if there exists a mapping $\varphi: [0,
+\infty)\rightarrow[0, +\infty)$ with $\varphi(0)=0$ and
$\varphi(t)>0$ for all $t>0$ such that
$$H(Sx, Ty)
\leq M(x, y)-\varphi(M(x, y))$$ for all $x, y \in X$).
A mapping $T : X \rightarrow CB(X)$ has the approximate endpoint
property if $$\inf\limits_{x\in X}\sup\limits_{y\in Tx}d(x, y) =
0.$$
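On a finite space the infimum above is a minimum and can be computed directly, as in the following sketch (illustrative only; the toy map below is not claimed to satisfy any contraction condition).
\begin{verbatim}
def approx_endpoint_gap(T, d):
    """inf_x sup_{y in T(x)} d(x, y) for a set-valued map T given as a dict
    x -> finite set; T has the approximate endpoint property iff this is 0."""
    return min(max(d(x, y) for y in Tx) for x, Tx in T.items())

if __name__ == "__main__":
    d = lambda x, y: abs(x - y)
    T = {0: {0}, 1: {0, 1}, 2: {1, 3}}
    print(approx_endpoint_gap(T, d))   # 0, attained at x = 0 where T(0) = {0}
\end{verbatim}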
Fixed points of multi-valued contraction mappings have been studied for more than forty years; see, for example, [1-5] and the references therein. Endpoints of multi-valued mappings were investigated as early as thirty years ago and have received considerable attention in recent years; see, e.g., [5-10]. Among other studies, several results closely related to the present work are as follows.
First, in the following theorem, Nadler [2] (1969) extended the
Banach contraction principle to multi-valued mappings.\\
\noindent{\bf Theorem 1.1.} \textit{Let $(X, d)$ be a complete
metric space. Suppose that $T : X \rightarrow CB(X)$ is a
contraction mapping in the sense that for some $0\leq \alpha<1,
H(Tx, Ty) \leq \alpha d(x, y)$ for all $x, y \in X$. Then there
exists a point $x \in X$ such that $x \in Tx$.}\\
\noindent Then Daffer and Kaneko [1] (1995) proved Theorem 1.2 and Theorem 1.3 below.\\
\noindent{\bf Theorem 1.2} ([1, Theorem 3.3]) \textit{Let $(X, d)$
be a complete metric space. Suppose that $T : X \rightarrow CB(X)$
is such that $H(Tx, Ty) \leq \alpha N(x, y)$ for some $0\leq\alpha<1$,
for all $x, y \in X$. If $x\rightarrow d(x, Tx)$ is lower
semicontinuous (l.s.c.), then there
exists a point $x_0\in X$ such that $x_0\in Tx_0$.}\\
\noindent{\bf Theorem 1.3} ([1, Theorem 2.3]). \textit{Let $(X, d)$
be a complete metric space and $T : X \rightarrow CB(X)$ weakly
contractive. Assume that
$$\liminf
\limits_{ \beta\rightarrow 0}\frac{\lambda(\alpha, \beta)}{\beta} >
0 \hspace{3mm}(0<\alpha\leq\beta),$$ where $\lambda(\alpha,
\beta)=\inf\{\phi(x, y)|x, y\in X, \alpha\leq d(x, y)\leq\beta\}$
for each finite interval $[\alpha, \beta]\subset(0, \infty)$. Then
$T$
has a fixed point in $X$.}\\
\noindent Later, Zhang and Song [3, Theorem 2.1] (2009) proved a theorem on the existence of a common fixed point for a duality of two single-valued generalized $\varphi$-weak contraction mappings. By extending the two single-valued mappings in the theorem of Zhang and Song [3] to two multi-valued mappings, and by extending the single multi-valued mapping of Theorem 1.2 to a duality of multi-valued mappings, Rouhani and Moradi [4] (2010) proved the following
coincidence theorem, without assuming $x\longrightarrow d(x, Tx)$ or
$x\rightarrow d(x, Sx)$ to be l.s.c.\\
\noindent{\bf Theorem 1.4} ([4, Theorem 3.1]). \textit{Let $(X, d)$
be a complete metric space, and let $T, S : X \rightarrow CB(X)$ be
two multivalued mappings such that for all $x, y\in X, H(Tx,Sy) \leq
\alpha M(x, y)$, where $0 \leq \alpha <1$. Then there exists a point
$x \in X$ such that $x \in Tx$ and $x \in Sx$ (i.e., $T$ and $S$
have a common fixed point). Moreover, if either $T$ or $S$ is single
valued, then
this common fixed point is unique.}\\\\
They also proved Theorem 1.5 below.\\
\noindent{\bf Theorem 1.5} ([4, Theorem 4.1]). \textit{Let $(X, d)$
be a complete metric space and let $T : X \rightarrow X$ and $S : X
\rightarrow CB(X)$ be two mappings such that for all $x, y \in X$,
$$H({Tx}, Sy)\leq M(x, y)-\varphi(M(x, y)),$$
where $\varphi: [0,
+\infty)\rightarrow[0, +\infty)$ is l.s.c. with $\varphi(0)=0$ and
$\varphi(t)>0$ for all $t>0$. Then there exists a unique point
$x \in X$ such that $Tx = x \in Sx$.}\\
\noindent Finally, for the endpoint of multi-valued mappings,
Amini-Harandi [6] (2010) proved Theorem 1.6 below.\\
\noindent{\bf
Theorem 1.6} ([6, Theorem 2.1]).\textit{Let $(X, d)$ be a complete
metric space and $T$ be a multi-valued mapping that satisfies
$$H(Tx, Ty)
\leq \varphi(d(x, y)),$$ for each $x, y \in X$ , where $\varphi :
[0,+\infty) \rightarrow [0,+\infty)$ is upper semicontinuous
(u.s.c.), $\varphi(t) < t$ for each $t
> 0$ and satisfies $\liminf \limits_{
t\rightarrow \infty }(t - \varphi(t)) > 0$. Then $T$ has a unique
endpoint if and only if T has the approximate endpoint property.}\\
\noindent Moradi and Khojasteh [7] (2011) extended the result of
Amini-Harandi
to Theorem 1.7 below.\\
\noindent{\bf Theorem 1.7} ([7, Theorem 2.1]). \textit{Let $(X, d)$
be a complete metric space and $T$ be a multi-valued mapping that
satisfies
$$H(Tx, Ty)
\leq \varphi(N(x, y)), $$ for each $ x, y \in X$ , where $\varphi :
[0,+\infty) \rightarrow [0,+\infty)$ is u.s.c. with $\varphi(t) < t$
for all $t
> 0$ and $\liminf \limits_{
t\rightarrow \infty }(t - \varphi(t)) > 0$. Then $T$ has a unique
endpoint if and only if T has the approximate endpoint property.}\\
Motivated by the contributions stated above, the present work further studies the common fixed points of a duality of generalized weak ($\varphi$-weak) contractions, as well as the common endpoints of such a duality. Our results extend Theorem 1.3, Theorem 1.4, Theorem 1.5 and Theorem 1.7.
\section{Preliminaries}
\label{1} This section establishes several lemmas that will be needed in the later sections.\\
\noindent{\bf Lemma 2.1.} \textit{Let $(X, d)$ be a complete metric
space
and let $S, T : X\rightarrow CB(X)$ be a duality of generalized weak
(or
$\varphi$-weak) contractions. Then $Fix(S)=Fix(T)$.}\\
\noindent{\bf Proof}. Let $x\in Fix(S)$. Then
$$\begin{array}{rcl}& &d(x, Tx)\leq H(Sx,Tx)\leq\alpha(x,x)M(x,x)\\
&=&\alpha(x,x)\max\{d(x,x), d(x,Sx), d(x,Tx), \frac{d(x,Tx)+d(x,Sx)
}{2}\}\\
&=&\alpha(x,x)d(x, Tx).\end{array}$$ Since $\alpha(x,x)<1$, this
implies $d(x,Tx)=0$.
That is, $x\in Fix(T)$, so $Fix(S)\subseteq Fix(T)$. (In the $\varphi$-weak case, the same estimate gives $d(x,Tx)\leq\varphi(d(x,Tx))$, which again forces $d(x,Tx)=0$.) By the symmetric argument with the roles of $S$ and $T$ exchanged, $Fix(T)\subseteq Fix(S)$. Hence $Fix(S)=Fix(T)$. $\square$
\noindent{\bf Lemma 2.2.} \textit{Let $(X, d)$ be a complete metric
space, $\gamma\in[0,1)$,
and let $\{x_{n}\}$ be a sequence in $X$ that satisfies
$$\begin{array}{rcl}d(x_n, x_{n+1}) \leq\gamma d(x_{n-1},
x_n)+\frac{1}{2^n}
\end{array}\eqno (2.1)$$
for all $n \in {\mathbb{N}}$ ($x_0 \in X$). Then $\{x_n\}$ is
convergent.}\\
\noindent{\bf Proof}. By (2.1), for each $n\in {\mathbb{N}}$,
$$\begin{array}{rcl} d(x_n, x_{n+1})&\leq &\gamma d(x_{n-1},
x_n)+\frac{1}{2^n}\\&\leq&\gamma[\gamma d(x_{n-2},
x_{n-1})+\frac{1}{2^{n-1}}]+\frac{1}{2^n}\\&=&\gamma^2 d(x_{n-2},
x_{n-1})+\frac{\gamma}{2^{n-1}}+\frac{1}{2^n}\\& &\cdots\\&\leq&
\gamma^n d(x_{0}, x_{1})+\frac{\gamma^{n-1}}{2^{1}}+\cdots
+\frac{\gamma^1}{2^{n-1}}+\frac{\gamma^0}{2^n}\\&\leq&\frac{M}{1-\gamma}
(\frac{\gamma^n}{2^0}+\frac{\gamma^{n-1}}{2^{1}}+\cdots
+\frac{\gamma^1}{2^{n-1}}+\frac{\gamma^0}{2^n}),
\end{array}$$
where $M=\max\{d(x_{0}, x_{1}),1\}$. Without loss of generality,
assume $M=1$. Then
$$\begin{array}{rcl} d(x_n, x_{n+1})\leq
\frac{\gamma^n}{2^0}+\frac{\gamma^{n-1}}{2^{1}}+\cdots
+\frac{\gamma^1}{2^{n-1}}+\frac{\gamma^0}{2^n}.
\end{array}\eqno
(2.2)$$ By (2.2), for any $n, m \in {\mathbb{N}}$, we have
$$\begin{array}{rcl}& & d(x_n, x_{n+m})\\&\leq& d(x_n, x_{n+1})+d(x_{n+1},
x_{n+2})+\cdots+d(x_{n+m-1},
x_{n+m})\\&\leq&\{[\frac{\gamma^n}{2^0}+\frac{\gamma^{n-1}}{2^1}
+\frac{\gamma^{n-2}}{2^2}+\cdots+\frac{\gamma^{0}}{2^n}]\\&
&+[\frac{\gamma^{n+1}}{2^0}+\frac{\gamma^{n}}{2^1}
+\frac{\gamma^{n-1}}{2^2}+\cdots+\frac{\gamma^{1}}{2^n}+\frac{\gamma^{0}}{2^{n+1}}]\\&
&\cdots\\& &+ [\frac{\gamma^{n+m-1}}{2^0}+\frac{\gamma^{n+m-2}}{2^1}
+\frac{\gamma^{n+m-3}}{2^2}+\cdots\\&
&+\frac{\gamma^{m-1}}{2^n}+\frac{\gamma^{m-2}}{2^{n+1}}+\cdots+
\frac{\gamma^{0}}{2^{n+m-1}}]\}\\&=&\{[\frac{1}{2^0}(\gamma^{n}+\gamma^{n+1}+
\cdots+\gamma^{n+m-1})+\\& &\frac{1}{2^1}(\gamma^{n-1}+\gamma^{n}+
\cdots+\gamma^{n+m-2})+\cdots\\&
&+\frac{1}{2^n}(\gamma^{0}+\gamma^{1}+
\cdots+\gamma^{m-1})]+[\frac{1}{2^{n+1}}(\gamma^{0}+\gamma^{1}+
\cdots+\gamma^{m-2})\\& &+\frac{1}{2^{n+2}}(\gamma^{0}+\gamma^{1}+
\cdots+\gamma^{m-3})\\&
&+\cdots+\frac{1}{2^{n+m-2}}(\gamma^{0}+\gamma^{1})+\frac{1}{2^{n+m-1}}(\gamma^{0})]\}
\\&=&\{[\frac{1}{2^0}(\frac{\gamma^{n}-\gamma^{n+m}}{1-\gamma})+
\frac{1}{2^1}(\frac{\gamma^{n-1}-\gamma^{n+m-1}}{1-\gamma})+\cdots+
\frac{1}{2^n}(\frac{\gamma^{0}-\gamma^{m-1}}{1-\gamma})]\\&
&+[\frac{1}{2^{n+1}}(\frac{\gamma^{0}-\gamma^{m-1}}{1-\gamma})+
\frac{1}{2^{n+2}}(\frac{\gamma^{0}-\gamma^{m-2}}{1-\gamma})+\cdots+
\frac{1}{2^{n+m-1}}(\frac{\gamma^{0}-\gamma^{1}}{1-\gamma})]\}
\\&<&\frac{1}{(1-\gamma)}[(\frac{\gamma^{n}}{2^0}+
\frac{\gamma^{n-1}}{2^1}+\cdots+ \frac{\gamma^{0}}{2^n})\\&
&+(\frac{1}{2^{n+1}}+ \frac{1}{2^{n+2}}+\cdots+
\frac{1}{2^{n+m-1}})]\\&=
&\frac{1}{(1-\gamma)}\{\gamma^{n}[\frac{1}{(2\gamma)^0}+
\frac{1}{(2\gamma)^1}+\cdots+
\frac{1}{(2\gamma)^n}]+\frac{\frac{1}{2^{n+1}}-\frac{1}{2^{n+m}}}{1-\frac{1}{2}}\}
\\&<&\frac{1}{(1-\gamma)}\{\gamma^{n}[\frac{1-\frac{1}{(2\gamma)^{n+1}}}{1-\frac{1}{(2\gamma)}}]
+\frac{1}{2^{n}}\}.\end{array}\eqno (2.3)$$ By (2.3), if
$2\gamma>1$, then
$$\begin{array}{rcl}& & d(x_n, x_{n+m})\\&<& \frac{1}{(1-\gamma)}
\{\gamma^n[\frac{2\gamma}{2\gamma-1}]+\frac{1}{2^n}\}=\frac{1}{(1-\gamma)}\cdot
\frac{(2\gamma)^{n+1}+2\gamma-1}{(2\gamma-1)2^n}\\&<&\frac{2\gamma
}{(2\gamma-1)(1-\gamma)}\cdot (\gamma^n+\frac{1}{2^n})<\frac{4
\gamma^{n+1}}{(2\gamma-1)(1-\gamma)}.\end{array}\eqno (2.4)$$
If $2\gamma<1$, we have
$$\begin{array}{rcl}& & d(x_n, x_{n+m})\\&=& \frac{1}{(1-\gamma)}
[\gamma^n\cdot\frac{1-(2\gamma)^{n+1}}{(2\gamma)^{n}-(2\gamma)^{n+1}}+\frac{1}{2^n}]<
\frac{1}{(1-\gamma)}
[\frac{\gamma^n}{(2\gamma)^{n}-(2\gamma)^{n+1}}+\frac{1}{2^n}]\\&=&\frac{1}{(1-\gamma)}
[\frac{1}{2^{n}(1-2\gamma)}+\frac{1}{2^n}]=\frac{1}{(1-\gamma)(1-2\gamma)2^{n-1}}.\end{array}\eqno
(2.5)$$
In the remaining case $2\gamma=1$, the bracketed sum $\frac{1}{(2\gamma)^{0}}+\frac{1}{(2\gamma)^{1}}+\cdots+\frac{1}{(2\gamma)^{n}}$ appearing in (2.3) equals $n+1$, so $d(x_n, x_{n+m})<\frac{1}{1-\gamma}\left(\frac{n+1}{2^{n}}+\frac{1}{2^{n}}\right)=\frac{n+2}{(1-\gamma)2^{n}}$, which also tends to $0$ as $n\rightarrow\infty$. From (2.4), (2.5) and this estimate, the sequence $\{x_n\}$ is a Cauchy sequence, hence convergent since $X$ is complete. This ends the proof.
$\square$\\
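The estimate behind Lemma 2.2 can be illustrated numerically: taking equality in (2.1) gives the worst case $d_{n}=\gamma d_{n-1}+2^{-n}$, and the partial sums $\sum_n d_n$, which dominate the distances $d(x_1,x_{N})$, remain bounded for every $\gamma\in[0,1)$. The following sketch (illustrative only) tabulates these partial sums, including the boundary case $2\gamma=1$.
\begin{verbatim}
def worst_case_partial_sums(gamma, d0=1.0, steps=60):
    """Partial sums of the worst-case gaps d_n = gamma*d_{n-1} + 2**(-n)
    allowed by (2.1); boundedness of these sums gives the Cauchy property."""
    d, total, sums = d0, 0.0, []
    for n in range(1, steps + 1):
        d = gamma * d + 2.0 ** (-n)
        total += d
        sums.append(total)
    return sums

if __name__ == "__main__":
    for gamma in (0.3, 0.5, 0.9):   # 0.5 is the boundary case 2*gamma = 1
        print(gamma, worst_case_partial_sums(gamma)[-1])
\end{verbatim}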
\noindent{\bf Lemma 2.3.} \textit{Let $(X, d)$ be a complete metric
space
and let $S, T : X\rightarrow CB(X)$ be a duality of generalized weak
contractions. Let also $\{x_{n}\}$ be a convergent sequence of $X$
that satisfies $x_{n+1}\in Sx_{n}$ for each even $n \in
{\mathbb{N}}$, $\lim\limits_{n\rightarrow\infty}x_{n}=x^\ast$ and
$\limsup\limits_{k\rightarrow\infty}\alpha(x_{2k}, x^\ast)<1$. Then
$x^\ast\in Fix(T)=Fix(S)$.}\\
\noindent{\bf Proof}. By the hypotheses, for each even $n
\in {\mathbb{N}}$, we have
$$\begin{array}{rcl}d(x_{n+1}, Tx^\ast) &\leq& H(Sx_{n}, Tx^\ast)
\leq\alpha(x_{n}, x^\ast)M(x_{n},
x^\ast);
\end{array}\eqno
(2.6)$$
$$\begin{array}{rcl}& &M(x_{n}, x^\ast)\\&=&\max\{d(x_{n}, x^\ast), d(x_{n}, Sx_{n}),
d(x^\ast, Tx^\ast),\\& & \frac{d(x_{n}, Tx^\ast)+d(x^\ast,
Sx_{n})}{2}\}\\&\leq&\max\{d(x_{n}, x^\ast), d(x_{n}, x_{n+1}),
d(x^\ast, Tx^\ast),\\& & \frac{d(x_{n}, x^\ast)+d(x^\ast,
Tx^\ast)+d(x^\ast, x_{n})+d(x_{n},
Sx_{n})}{2}\}\\&\leq&\max\{d(x_{n}, x^\ast), d(x_{n}, x_{n+1}),
d(x^\ast, Tx^\ast),\\& &d(x_{n}, x^\ast)+ \frac{d(x^\ast,
Tx^\ast)+d(x_{n}, x_{n+1})}{2}\}.
\end{array}\eqno
(2.7)$$Note that $\lim\limits_{n\rightarrow\infty}x_{n}=x^\ast$.
Combining (2.6) and (2.7), we further obtain
$$\begin{array}{rcl}d(x^\ast, Tx^\ast)&\leq&[\limsup\limits_{k\rightarrow\infty}
\alpha(x_{2k}, x^\ast)]\limsup\limits_{k\rightarrow\infty}M(x_{2k},
x^\ast)
\\&\leq&[\limsup\limits_{k\rightarrow\infty}\alpha(x_{2k}, x^\ast)]d(x^\ast,
Tx^\ast).
\end{array}$$
Since $\limsup\limits_{k\rightarrow\infty}\alpha(x_{2k}, x^\ast)<1$, this
implies $d(x^\ast, Tx^\ast)=0$. That is, $x^\ast\in Tx^\ast$, and hence $x^\ast\in Fix(T)=Fix(S)$ by Lemma 2.1. This completes the proof. $\square$\\
\noindent{\bf Lemma 2.4.} \textit{Let $(X, d)$ be a complete metric
space
and $S, T : X \rightarrow CB(X)$ be a duality of generalized weak
contractions. Then the following conclusions hold.\\
(1) $End(S)=End(T) (\subseteq Fix(S)=Fix(T))$ and $|End(S)|\leq 1$. Here $|End(S)|$ denotes the cardinal number of $End(S)$. (This implies that $S$ and $T$ have a unique common endpoint, or have no endpoint.)\\
(2) If $S$ and $T$ have a common endpoint, then $\inf\limits_{x\in X}[H(\{x\}, Sx)+H(\{x\}, Tx)]=0$; we refer to this as the approximate endpoint property of the duality $S$ and $T$.\\
(3) If either $S$ or $T$ is single valued, then
$End(S)=End(T)=Fix(S)=Fix(T)$. (This implies that the fixed points
of $S$ and
$T$ must be endpoints.)}\\
\noindent{\bf Proof}. Let $x\in End(S)$. Then $x\in
Fix(S)=Fix(T)$ from Lemma 2.1. This implies $M(x,x)=0$. Therefore,
we have
$$\begin{array}{rcl}& &H(\{x\}, Tx)= H(Sx,Tx)\leq\alpha(x,x)M(x,x)=0.\end{array}$$
This means $Tx=\{x\}$. That is, $x\in End(T)$. Hence
$End(S)=End(T)$.
Let $x,y\in End(S)=End(T)$. Then $M(x,y)=d(x,y)$, further
$$d(x,y)=H(\{x\}, \{y\})=
H(Sx,Ty)\leq\alpha(x,y)M(x,y)=\alpha(x,y)d(x,y).$$ For
$\alpha(x,y)<1$, this implies $d(x,y)=0$. That is $x=y$. Hence
$|End(S)|\leq 1$.
We have proved (1). (2) is obvious. Next we further prove (3).
Suppose that one of $S$ and $T$ is single valued. Without loss of
generality, we assume $S$ is single valued. Then it is obvious that
$End(S)=Fix(S)$. So $End(T)=End(S)=Fix(S)=Fix(T)$. This ends the
proof. $\square$
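The distinction between $Fix(T)$ and $End(T)$ in Lemma 2.4 is easy to see on finite examples. The toy map in the sketch below (illustrative only, and not claimed to be a weak contraction) has a fixed point that is not an endpoint.
\begin{verbatim}
def fixed_points(T):
    """Fix(T) = {x : x in T(x)} for a set-valued map T given as a dict x -> set."""
    return {x for x, Tx in T.items() if x in Tx}

def endpoints(T):
    """End(T) = {x : T(x) == {x}}."""
    return {x for x, Tx in T.items() if Tx == {x}}

if __name__ == "__main__":
    T = {0: {0}, 1: {0, 1}, 2: {0}}
    print(fixed_points(T), endpoints(T))   # {0, 1} and {0}
\end{verbatim}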
\section{Fixed
point theory} \label{1} In this section we focus on the fixed point theory.
We are now in a position to prove our first theorem, which extends
Theorem 2.3 of Daffer and Kaneko [1] by generalizing one mapping
$T$ to two mappings $S$ and $T$, and by improving the other
conditions, which also extends Theorem 3.1 of Rouhani and Moradi [4]
by replacing the constant contraction factor $\alpha$ with an
general
$\alpha(x, y)$.\\
\noindent{\bf Theorem 3.1.} \textit{Let $(X, d)$ be a complete
metric space
and let $S, T : X \rightarrow CB(X)$ be a duality of generalized weak contractions satisfying
$$\sup\{\alpha(x_{2k-2},
x_{2k-1}),\alpha(x_{2k},
x_{2k-1})|k\in{\mathbb{N}}\}<1 \eqno (3.1)$$
for any sequence $\{x_n\}$ in $X$ for which $\{d(x_{n}, x_{n+1})\}$ is monotone decreasing, and such that $\alpha$ is u.s.c. (or
$\limsup\limits_{n\rightarrow\infty}\alpha(x_{n}, x^\ast)<1$ if
$\lim\limits_{n\rightarrow\infty}x_{n}=x^\ast$).
Then $Fix(S)=Fix(T)\neq\emptyset$.}\\
\noindent{\bf Proof.} (1) By Lemma 2.1, $Fix(S)=Fix(T)$. To complete
the proof, it suffices to prove
$Fix(S)=Fix(T)\neq\emptyset$. Arguing by contradiction, we assume
$Fix(S)=Fix(T)=\emptyset$.
\noindent(2) Let $x_0 \in X$. Then $d(x_0,Sx_0)>0$. It is obvious
that we can choose an $x_1 \in Sx_0$ such that $0<d(x_0,
x_1)<d(x_0,Sx_0)+1$, and $d(x_1,Tx_1)>0$.
Let $\varepsilon_1=\min\{\frac{1}{2}, [1-\alpha(x_0, x_1)]d(x_0,
x_1)\}$. Then there exists a $x_2 \in Tx_1$ such that $0<d(x_1,
x_2)<d(x_1,Tx_1)+\varepsilon_1$. Let
$\varepsilon_2=\min\{\frac{1}{2^2}, [1-\alpha(x_2, x_1)]d(x_1,
x_2)\}$. Then there exists a $x_3 \in Sx_2$ such that $0<d(x_2,
x_3)<d(x_2,Sx_2)+\varepsilon_2$. Inductively, we have the general
fact as follows.
For each $k\in {\mathbb{N}}$, let
$$\varepsilon_{2k-1}=\min\{\frac{1}{2^{2k-1}}, [1-\alpha(x_{2k-2},
x_{2k-1})]d(x_{2k-2}, x_{2k-1})\}.$$ Then there exists an $x_{2k} \in
Tx_{2k-1}$ such that
$$0<d(x_{2k-1},
x_{2k})<d(x_{2k-1},Tx_{2k-1})+\varepsilon_{2k-1}\leq
d(x_{2k-1},Tx_{2k-1})+\frac{1}{2^{2k-1}}.$$ Let also
$$\varepsilon_{2k}=\min\{\frac{1}{2^{2k}}, [1-\alpha(x_{2k},
x_{2k-1})]d(x_{2k-1}, x_{2k})\}.$$ Then there exists a $x_{2k+1} \in
Sx_{2k}$ such that $$0<d(x_{2k},
x_{2k+1})<d(x_{2k},Sx_{2k})+\varepsilon_{2k}\leq
d(x_{2k},Sx_{2k})+\frac{1}{2^{2k}}.$$
\noindent(3) For the sequence $\{x_{n}\}$ constructed above,
$\forall n \in {\mathbb{N}}$, when $n$ is odd, we have
$$[1-\alpha(x_{n-1}, x_{n})]d(x_{n-1}, x_{n})\geq
\varepsilon_{n}.\eqno (3.2)$$ Further,
$$\begin{array}{rcl}d(x_n, x_{n+1})&< &
d(x_n, Tx_{n})+\varepsilon_{n} \leq H(Sx_{n-1},
Tx_{n})+\varepsilon_{n} \\&\leq&\alpha(x_{n-1}, x_n)M(x_{n-1},
x_n)+\varepsilon_{n}\\&\leq&\alpha(x_{n-1}, x_n)M(x_{n-1},
x_n)+\frac{1}{2^n},
\end{array}\eqno (3.3)$$
$$\begin{array}{rcl} & &
M(x_{n-1}, x_n)\\
& \leq& \max\{d(x_{n-1}, x_n), d(x_{n-1}, Sx_{n-1}), d(x_n,
Tx_n),\\& & \frac{d(x_{n-1}, Tx_n) + d(x_{n}, Sx_{n-1})}{2}\}\\&\leq
& \max\{d(x_{n-1}, x_n), d(x_{n-1}, x_{n}), d(x_n, x_{n+1}),\\& &
\frac{d(x_{n-1}, x_n) +d(x_{n}, Tx_{n})}{2}\} \\& \leq&
\max\{d(x_{n-1}, x_n), d(x_n, x_{n+1}), \frac{d(x_{n-1}, x_n)
+d(x_{n}, x_{n+1})}{2}\}\\ & =& \max\{d(x_{n-1}, x_n), d(x_n,
x_{n+1})\}.
\end{array}\eqno (3.4)$$
If $\max\{d(x_{n-1}, x_n), d(x_n, x_{n+1})\}=d(x_n, x_{n+1})$, i.e.
$d(x_{n-1}, x_n)\leq d(x_n, x_{n+1})$, then
$$[1- \alpha(x_{n-1}, x_n)]d(x_{n-1}, x_n)\leq[1- \alpha(x_{n-1},
x_n)]d(x_{n}, x_{n+1}),\eqno (3.5)$$ and from (3.3) and (3.4), we
obtain
$$\begin{array}{rcl}& &d(x_n, x_{n+1})<
\alpha(x_{n-1}, x_n)d(x_n, x_{n+1})+\varepsilon_{n}\\&\Rightarrow&
[1- \alpha(x_{n-1}, x_n)]d(x_{n}, x_{n+1})<\varepsilon_{n}.
\end{array}\eqno (3.6)$$
Combining (3.5) and (3.6), we obtain $[1- \alpha(x_{n-1}, x_n)]d(x_{n-1}, x_n)<\varepsilon_{n}$. This contradicts (3.2). So $\max\{d(x_{n-1}, x_n), d(x_n, x_{n+1})\}\neq d(x_n, x_{n+1})$, which yields $\max\{d(x_{n-1}, x_n), d(x_n, x_{n+1})\}=d(x_{n-1}, x_n)$ and $d(x_n, x_{n+1})<d(x_{n-1}, x_n)$. Also, from (3.3) and (3.4),
we obtain
$$\begin{array}{rcl}d(x_n, x_{n+1})&< &
\alpha(x_{n-1}, x_n)d(x_{n-1}, x_n)+\frac{1}{2^n}.
\end{array}\eqno (3.7)$$
When $n$ is even, we have
$$[1-\alpha(x_{n}, x_{n-1})]d(x_{n-1}, x_{n})\geq
\varepsilon_{n},\eqno (3.8)$$
$$\begin{array}{rcl}& &d(x_n, x_{n+1})=d( x_{n+1},x_n)\\&< &
d(Sx_n, x_{n})+\varepsilon_{n} \leq H(Sx_{n},
Tx_{n-1})+\varepsilon_{n} \\&\leq&\alpha( x_n,x_{n-1})M(x_{n},
x_{n-1})+\varepsilon_{n}\\&\leq&\alpha( x_n,x_{n-1})M(x_{n},
x_{n-1})+\frac{1}{2^n},
\end{array}\eqno (3.9)$$
$$\begin{array}{rcl} & &
M(x_{n}, x_{n-1})\\
& \leq& \max\{d(x_{n-1}, x_n), d(x_{n}, Sx_{n}), d(x_{n-1},
Tx_{n-1}),\\& & \frac{d(x_{n}, Tx_{n-1}) + d(x_{n-1},
Sx_{n})}{2}\}\\&\leq & \max\{d(x_{n-1}, x_n), d(x_n, x_{n+1}),
d(x_{n-1}, x_{n}),\\& & \frac{d(x_{n-1}, x_n) +d(x_{n},
Sx_{n})}{2}\}
\\& \leq& \max\{d(x_{n-1}, x_n), d(x_n, x_{n+1}), \frac{d(x_{n-1},
x_n) +d(x_{n}, x_{n+1})}{2}\}\\ & =& \max\{d(x_{n-1}, x_n), d(x_n,
x_{n+1})\}.
\end{array}\eqno (3.10)$$
From (3.8), (3.9) and (3.10), arguing in the same way as above, we
can also obtain $d(x_n, x_{n+1})<d(x_{n-1}, x_n)$ and
$$\begin{array}{rcl}d(x_n, x_{n+1})&< &
\alpha(x_n, x_{n-1})d(x_{n-1}, x_n)+\frac{1}{2^n}.
\end{array}\eqno (3.11)$$
(4)
From step (3), the sequence $\{d(x_{n}, x_{n+1})\}$ is monotone decreasing. Hence (3.1) applies, so there exists a
$\gamma<1$ such that $$\max\{\alpha(x_{2k-2},
x_{2k-1}),\alpha(x_{2k},
x_{2k-1})\}<\gamma$$ for all $k\in {\mathbb{N}}$. Therefore
using (3.7) and (3.11) we obtain (2.1). Thus $\{x_{n}\}$ is convergent by Lemma 2.2.
Finally, let $\lim\limits_{n\rightarrow\infty}x_{n}=x^\ast$. Then,
since $\alpha$ is u.s.c. we have
$\limsup\limits_{n\rightarrow\infty}\alpha(x_{n},
x^\ast)\leq\alpha(x^\ast, x^\ast)<1$. Recalling the way the sequence $\{x_{n}\}$ was constructed (in particular, $x_{2k+1}\in Sx_{2k}$ for each $k$), Lemma 2.3 gives $x^\ast\in Tx^\ast$. This contradicts $Fix(T)=\emptyset$. So $Fix(S)=Fix(T)\neq\emptyset$. This completes the proof. $\square$
As an application, we deduce Theorem 1.3 from Theorem 3.1 as follows.
\noindent{\bf Proof of Theorem 1.3}. Let
$$\begin{array}{rcl}\alpha(x, y)=
\left\{\begin{array}{l}1-\frac{\phi(x, y)}{d(x, y)}, d(x, y)\neq
0;\\0, d(x, y)=0,\end{array}\right.
\end{array}$$
and $S=T$. Then $S$ and $T$ are a duality of generalized weak
contractions. Let also $\{x_{n}\}$ be a sequence in $X$ such that $\{d(x_{n},x_{n+1})\}$ is monotone decreasing, and assume
$\lim\limits_{n\rightarrow \infty}d(x_{n},x_{n+1})=r$.
If $r>0$, then $\lambda(r,d(x_{1},x_{2}))>0$ since $\phi$ is compactly positive. On the other hand,
$\phi(x_{n},x_{n+1})\geq\lambda(r,d(x_{1},x_{2}))$ since
$r<d(x_{n},x_{n+1})\leq d(x_{1},x_{2})$. So we have
$$\alpha(x_{n},x_{n+1})=1-\frac{\phi(x_{n},x_{n+1})}{d(x_{n},x_{n+1})}\leq
1-\frac{\lambda(r,d(x_{1},x_{2}))}{d(x_{1},x_{2})}<1.$$ With the
same argument, $\alpha(x_{n+1},x_{n})\leq
1-\frac{\lambda(r,d(x_{1},x_{2}))}{d(x_{1},x_{2})}$. Hence (3.1)
holds. If $r=0$, then
$$\begin{array}{rcl}\limsup\limits_{n\rightarrow
\infty}\alpha(x_{n},x_{n+1})&=&\limsup\limits_{n\rightarrow
\infty}[1-\frac{\phi(x_{n},x_{n+1})}{d(x_{n},x_{n+1})}]\\&\leq&
\limsup\limits_{n\rightarrow
\infty}[1-\frac{\lambda(d(x_{n-1},x_{n}),
d(x_{n},x_{n+1}))}{d(x_{n},x_{n+1})}]\\&\leq&
1-\liminf\limits_{n\rightarrow
\infty}\frac{\lambda(d(x_{n-1},x_{n}),
d(x_{n},x_{n+1}))}{d(x_{n},x_{n+1})}\\&\leq&1-\liminf\limits_{\beta\rightarrow
0}\frac{\lambda(\alpha, \beta)}{\beta}<1.\end{array}$$ With the
same argument, $\limsup\limits_{n\rightarrow
\infty}\alpha(x_{n+1},x_{n})<1$. Hence (3.1) holds.
Let $\lim\limits_{n\rightarrow \infty}x_{n}=x^\ast$. Without loss of
generality, assume $d(x_{n}, x^\ast)\neq 0$ for all $
n\in{\mathbb{N}}$. Then
$$\begin{array}{rcl}& &\limsup\limits_{n\rightarrow\infty}\alpha(x_{n}, x^\ast)
=\limsup\limits_{n\rightarrow\infty}[1-\frac{\phi(x_{n},
x^\ast)}{d(x_{n}, x^\ast)}]\leq\\& &
1-\liminf\limits_{n\rightarrow\infty}\frac{\phi(x_{n},
x^\ast)}{d(x_{n}, x^\ast)}\leq 1-\liminf\limits_{\beta\rightarrow
0}\frac{\lambda(\alpha, \beta)}{\beta}<1.\end{array}$$
Combining the results above, by Theorem 3.1, $T$ has a fixed point.
This ends the proof.
$\square$\\
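For a single-valued map the construction in the proof of Theorem 3.1 reduces to ordinary Picard iteration. The sketch below is illustrative only; the map $x\mapsto x/(1+x)$ is a toy example chosen here, not one taken from the references. It shows the iterates approaching the fixed point $0$ with monotonically decreasing successive distances, as in step (3) of the proof.
\begin{verbatim}
def iterate(T, x0, steps=40):
    """Picard iteration x_{n+1} = T(x_n) for a single-valued self-map T."""
    xs = [x0]
    for _ in range(steps):
        xs.append(T(xs[-1]))
    return xs

if __name__ == "__main__":
    T = lambda x: x / (1.0 + x)      # toy self-map of [0, infinity) with fixed point 0
    xs = iterate(T, 5.0)
    gaps = [abs(b - a) for a, b in zip(xs, xs[1:])]
    print(round(xs[-1], 6), all(g2 <= g1 for g1, g2 in zip(gaps, gaps[1:])))
\end{verbatim}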
\noindent{\bf Theorem 3.2.} \textit{Let $(X, d)$ be a complete
metric space
and let $S, T : X\rightarrow CB(X)$ be a duality of generalized $\varphi$-weak
contractions such that $\varphi$ is u.s.c. and
$$\limsup\limits_{t\rightarrow 0}\frac{\varphi(t)}{t}<1.\eqno (3.13)$$
Then $Fix(S)=Fix(T)\neq\emptyset$.}\\
\noindent{\bf Proof}. For any $(x, y)\in X\times X$, put
$$\begin{array}{rcl}\alpha(x, y)=
\left\{\begin{array}{l}\frac{\varphi(M(x, y))}{M(x, y)}, M(x, y)\neq
0;\\0, M(x, y)=0.\end{array}\right.
\end{array}$$
Then it can be easily verified that $H(Sx, Ty)\leq\alpha(x, y)M(x,
y)$; a short check is given below. That is, $S, T : X\rightarrow CB(X)$ are a duality of
generalized weak contractions with this $\alpha(x, y)$. Note that the
conditions that $\alpha$ is u.s.c. and (3.1) are used only in step
(4) of the proof of Theorem 3.1, so steps (1), (2) and (3) carry over
to the proof of Theorem 3.2. The proof can therefore be completed by
replacing step (4) with the step $(4)'$ below.
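As for the claimed inequality, here is a brief check (a sketch, assuming the defining condition of the duality is $H(Sx,Ty)\leq\varphi(M(x,y))$, as it is used to obtain (4.7) in the proof of Theorem 4.2 below, and that $\varphi(0)=0$): when $M(x,y)\neq 0$,
$$H(Sx, Ty)\leq\varphi(M(x,y))=\frac{\varphi(M(x,y))}{M(x,y)}\,M(x,y)=\alpha(x,y)M(x,y),$$
while when $M(x,y)=0$ both sides of the claimed inequality vanish.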
\noindent $(4)'$ $\forall n\in{\mathbb{N}}$, assume first that $n$
is odd. Note that $0<d(x_{n-1}, x_{n})\leq M(x_{n-1}, x_{n})$ and
$$\max\{d(x_{n-1}, x_{n}), d(x_n, x_{n+1})\}=d(x_{n-1}, x_{n}).$$ By
(3.4), we have $M(x_{n-1}, x_{n})=d(x_{n-1}, x_{n})>0$. This leads
to
$$\alpha(x_{n-1}, x_{n})=\frac{\varphi(d(x_{n-1}, x_{n}))}{d(x_{n-1},
x_{n})}.\eqno (3.14)$$ Further, from (3.7), we obtain $$d(x_{n},
x_{n+1})\leq \varphi(d(x_{n-1}, x_{n}))+\frac{1}{2^n}.\eqno (3.15)$$
When $n$ is even, with the same argument, we have (3.15) and
$$\alpha(x_{n}, x_{n-1})=\frac{\varphi(d(x_{n-1},
x_{n}))}{d(x_{n-1}, x_{n})}.\eqno (3.16)$$ Since the sequence
$\{d(x_{n}, x_{n+1})\}$ is monotone decreasing and bounded below, it
is convergent. Let $\lim\limits_{n\rightarrow\infty}d(x_n,
x_{n+1})=r$. Since $\varphi$ is u.s.c., using (3.15) we have $r\leq
\varphi(r)$. This implies $r=0$ because $\varphi(t)<t$ for all
$t>0$. Therefore, according to (3.14) and (3.16), we respectively
have
$$\limsup\limits_{k\rightarrow\infty}\alpha(x_{2k-2}, x_{2k-1})=
\limsup\limits_{k\rightarrow\infty}\frac{\varphi(d(x_{2k-2},
x_{2k-1}))}{d(x_{2k-2}, x_{2k-1})}\leq \limsup\limits_{t\rightarrow
0}\frac{\varphi(t)}{t}<1,$$
$$\limsup\limits_{k\rightarrow\infty}\alpha(x_{2k}, x_{2k-1})=
\limsup\limits_{k\rightarrow\infty}\frac{\varphi(d(x_{2k},
x_{2k-1}))}{d(x_{2k}, x_{2k-1})}\leq \limsup\limits_{t\rightarrow
0}\frac{\varphi(t)}{t}<1.$$ Hence (3.1) holds.
Using
(3.7) and (3.11) we obtain (2.1). Thus $\{x_{n}\}$ is convergent by Lemma 2.2.
Finally, let $\lim\limits_{n\rightarrow\infty}x_{n}=x^\ast$. Then
for each even $n$, we have (2.7). This reduces to
$\limsup\limits_{k\rightarrow\infty}M(x_{2k}, x^\ast)\leq d(x^\ast,
Tx^\ast)$. So there exists a positive number $b$ such that
$M(x_{2k}, x^\ast)\leq b$. Since $\varphi$ is u.s.c. and
$\limsup\limits_{t\rightarrow 0}\frac{\varphi(t)}{t}<1$, we have
$\sup\{\frac{\varphi(t)}{t}|t\in(0, b]\}<1$. Therefore,
$\limsup\limits_{n\rightarrow\infty}\alpha(x_{n},
x^\ast)=\limsup\limits_{n\rightarrow\infty}\frac{\varphi(M(x_{n},
x^\ast))}{M(x_{n}, x^\ast)}\leq\sup\{\frac{\varphi(t)}{t}|t\in(0,
b]\}<1$. By Lemma 2.3, $x^\ast\in Tx^\ast$. This contradicts
$Fix(T)=\emptyset$. So $Fix(S)=Fix(T)\neq\emptyset$. The proof ends.
$\square$
Theorem 3.2 extends Theorem 4.1 of Rouhani and Moradi [4] by
allowing both mappings $S$ and $T$ to be multi-valued. However,
we add the condition (3.1). Whether Theorem 3.2 holds without
the condition (3.1) remains open and is a topic for further study.
\section{Endpoint theory} \label{1}Now we turn to address the endpoint theory.
In terms of Theorem 3.1 (Theorem 3.2) and Lemma 2.4, we can
immediately get the next corollary.\\
{\bf Corollary 3.2.} \textit{Under the conditions of Theorem 3.1 (resp.
Theorem 3.2), if either $S$ or $T$ is single-valued, then there
exists a unique common fixed point for $S$ and $T$, which is also
their unique common endpoint.}\\
For the case where both $S$ and $T$ are multi-valued, we have Theorem 4.1 and
Theorem 4.2 below.\\
{\bf Theorem 4.1.} \textit{Let $(X, d)$ be a complete metric space
and let $S, T : X\rightarrow CB(X)$ be a duality of generalized weak
contractions such that $\alpha$ is u.s.c. and
$$\limsup\limits_{n, m\rightarrow\infty}\alpha(x_n,x_m)<1\eqno (4.1)$$
whenever $\lim\limits_{n,
m\rightarrow\infty}d(x_n,x_m)[1-\alpha(x_n,x_m)]=0$. Then $S$ and
$T$ have a unique common endpoint if they have the approximate
endpoint property.}\\
\noindent{\bf Proof}. Suppose that $S$ and $T$ have the approximate
endpoint property. Then there exists a sequence $\{x_n\}$ such that
$$\lim\limits_{n\rightarrow \infty}[H(\{x_n\}, Sx_n)+ H(\{x_n\},
Tx_n)]= 0.$$
For all $m, n \in {\mathbb{N}}$, we have
$$\begin{array}{rcl}& & M(x_n, x_m)\\ &=& \max\{d(x_n, x_m), d(x_n, Sx_n), d(x_m,
Tx_m), \\& &\frac{d(x_n, Tx_m) + d(x_m, Sx_n)}{ 2 }\}\\& \leq &
\max\{d(x_n, x_m), H(\{x_n\}, Sx_n), H(\{x_m\}, Tx_m),\\& &
\frac{d(x_n, x_m)+H(\{x_m\}, Tx_m) +d(x_n, x_m)+ H(\{x_n\},
Sx_n)}{2}\} \\&\leq & d(x_n, x_m) + H(\{x_n\}, Sx_n) + H(\{x_m\},
Tx_m). \end{array}\eqno (4.2)$$ Note that $d(x_n, x_m)\leq
H(\{x_n\}, Sx_n) +H(Sx_n, Tx_m)+ H(\{x_m\}, Tx_m)$. From (4.2), we
further have
$$\begin{array}{rcl}& & M(x_n, x_m)\\&\leq &d(x_n, x_m)
-H(\{x_n\}, Sx_n) - H(\{x_m\}, Tx_m)\\& & + 2H(\{x_n\}, Sx_n) +
2H(\{x_m\}, Tx_m)\\ &\leq & H(Sx_n, Tx_m) + 2H(\{x_n\}, Sx_n) +
2H(\{x_m\}, Tx_m). \end{array}\eqno (4.3)$$ This reduces to
$$\begin{array}{rcl}& & M(x_n, x_m)\\&\leq & \alpha(x_n,
x_m) M(x_n, x_m) + 2H(\{x_n\}, Sx_n) + 2H(\{x_m\},
Tx_m).\end{array}\eqno (4.4)$$ Note that $d(x_n, x_m)\leq M(x_n,
x_m)$. Using (4.4) we obtain
$$\begin{array}{rcl}& & d(x_n, x_m)[1-\alpha(x_n,
x_m)]\\&\leq & M(x_n, x_m)[1-\alpha(x_n, x_m)]\\&\leq & 2H(\{x_n\},
Sx_n) + 2H(\{x_m\}, Tx_m)\\&\Rightarrow &\lim\limits_{
n,m\rightarrow \infty}d(x_n, x_m)[1-\alpha(x_n, x_m)]=
0.\end{array}\eqno (4.5)$$ From (4.5) and the hypothesis, (4.1) holds. Using also
(4.4), we obtain
$$\limsup\limits_{
n,m\rightarrow \infty}M(x_n, x_m)\leq[\limsup\limits_{
n,m\rightarrow \infty}\alpha(x_n, x_m)]\limsup\limits_{
n,m\rightarrow \infty}M(x_n, x_m).$$ By (4.1), this yields
$\limsup\limits_{ n,m\rightarrow \infty}M(x_n, x_m)=0$. Hence
$\limsup\limits_{ n,m\rightarrow \infty}d(x_n, x_m)=0$, i.e.
$\{x_n\}$ is a Cauchy sequence.
Let $\lim\limits_{ n\rightarrow \infty}x_n=x^\ast$. For all $ n \in
{\mathbb{N}}$, we have
$$\begin{array}{rcl}& &H(\{x_n\}, Tx^\ast)- H(\{x_n\}, Sx_n)\\&\leq& H(Sx_n, Tx^\ast)
\leq \alpha(x_n, x^\ast)M(x_n, x^\ast)\\& =&\alpha(x_n,
x^\ast)\max\{d(x_n, x^\ast), d(x_n, Sx_n), d(x^\ast, Tx^\ast),\\&
&\frac{d(x_n, Tx^\ast)+d(x^\ast, Sx_n)}{2}\}\\ &\leq&\alpha(x_n,
x^\ast)\max\{d(x_n, x^\ast), d(x_n, Sx_n), d(x^\ast, Tx^\ast),\\&
&\frac{d(x_n, x^\ast)+d(x^\ast, Tx^\ast)+d(x^\ast, x_n)+d(x_n,
Sx_n)}{2}\}.
\end{array}$$ Noting also that $\alpha$ is u.s.c. and letting $n\rightarrow\infty$, we obtain
$$H(\{x^\ast\}, Tx^\ast)\leq\alpha(x^\ast,
x^\ast)d(x^\ast, Tx^\ast)\leq\alpha(x^\ast, x^\ast)H(\{x^\ast\},
Tx^\ast).$$ Since $\alpha(x^\ast, x^\ast)<1$, we conclude that
$H(\{x^\ast\}, Tx^\ast)=0$. This means $Tx^\ast = \{x^\ast\}$.
Finally, the uniqueness of the endpoint is concluded from Lemma
2.4. $\square$
The following Theorem 4.2 is our final result; it extends
Theorem 2.1 of Moradi and Khojasteh [7] to the case where both
mappings are multi-valued.\\
{\bf Theorem 4.2.} \textit{Let $(X, d)$ be a complete metric space
and let $S, T : X\rightarrow CB(X)$ be a duality of $\varphi$-generalized weak
contractions such that $\varphi$ is u.s.c. and
$$\liminf\limits_{t\rightarrow \infty}[t-\varphi(t)]>0.\eqno (4.6)$$
Then $S$ and $T$ have a unique common endpoint if they have the
approximate endpoint property.}\\
\noindent{\bf Proof}. Suppose that $S$ and $T$ have the approximate
endpoint property. Then there exists a sequence $\{x_n\}$ such that
$$\lim\limits_{n\rightarrow \infty}[H(\{x_n\}, Sx_n)+ H(\{x_n\},
Tx_n)]= 0,$$ and (4.2) and (4.3) hold as before.
By (4.3) we have
$$\begin{array}{rcl}& & M(x_n, x_m)\\&\leq & \varphi( M(x_n, x_m)) + 2H(\{x_n\}, Sx_n) + 2H(\{x_m\},
Tx_m).\end{array}\eqno (4.7)$$ If $\limsup\limits_{ n,m\rightarrow
\infty}M(x_n, x_m)=+\infty$, then
$$\begin{array}{rcl}\liminf\limits_{
t\rightarrow \infty}[t-\varphi(t)]\leq\liminf\limits_{
n,m\rightarrow \infty}[M(x_n, x_m)-\varphi(M(x_n, x_m))]\leq
0.\end{array}$$ This contradicts (4.6). So $\limsup\limits_{
n,m\rightarrow \infty}M(x_n, x_m)<+\infty$. Noting also $\varphi(t)$
is u.s.c. and using (4.7), we obtain
$$\limsup\limits_{
n,m\rightarrow \infty}M(x_n, x_m)\leq\limsup\limits_{ n,m\rightarrow
\infty}\varphi(M(x_n, x_m))\leq\varphi(\limsup\limits_{
n,m\rightarrow \infty}M(x_n, x_m)).$$ Note that $\limsup\limits_{
n,m\rightarrow \infty}M(x_n, x_m)<+\infty$ and $\varphi(t)<t$ for
all $t>0$. This implies
$\limsup\limits_{ n,m\rightarrow
\infty}M(x_n, x_m)=0$. Thus $\{x_n\}$ is a Cauchy sequence.
Let $\lim\limits_{ n\rightarrow \infty}x_n=x^\ast$. For all $ n \in
{\mathbb{N}}$, we have
$$\begin{array}{rcl}& &H(\{x_n\}, Tx^\ast)- H(\{x_n\}, Sx_n)\\&\leq& H(Sx_n, Tx^\ast)
\leq \varphi(M(x_n, x^\ast)).
\end{array}$$ This reduces to
$$H(\{x^\ast\}, Tx^\ast)\leq \limsup\limits_{ n\rightarrow \infty}\varphi(M(x_n,
x^\ast))\leq\varphi(\limsup\limits_{ n\rightarrow \infty}M(x_n,
x^\ast)). \eqno(4.8)$$ On the other hand,
$$\begin{array}{rcl}M(x_n, x^\ast) &\leq&\max\{d(x_n,
x^\ast), d(x_n, Sx_n), d(x^\ast, Tx^\ast),\\& &\frac{d(x_n,
x^\ast)+d(x^\ast, Tx^\ast)+d(x^\ast, x_n)+d(x_n, Sx_n)}{2}\}.
\end{array}\eqno(4.9)$$
If $H(\{x^\ast\}, Tx^\ast)\neq 0$, from (4.8) and (4.9), we have
$$\begin{array}{rcl}& &H(\{x^\ast\}, Tx^\ast)<\limsup\limits_{ n\rightarrow \infty}M(x_n,
x^\ast)\leq d(x^\ast, Tx^\ast)\leq H(\{x^\ast\}, Tx^\ast).\end{array}$$
This contradiction shows $H(\{x^\ast\}, Tx^\ast)=0$. That is,
$Tx^\ast = \{x^\ast\}$. Finally, the uniqueness of the endpoint is
concluded from Lemma 2.4. $\square$
\noindent{\bf Remark 4.3.} By taking $S=T$, we
can immediately obtain Theorem 2.1 of Moradi and Khojasteh [7]
from Theorem 4.2 and Lemma
2.4.\\
{\bf Acknowledgements}
The author cordially thanks the anonymous referees for their valuable
comments, which led to the improvement of this paper.
\end{document}
\begin{document}
\begin{center}
{\Large \bf{On Warped Product Gradient Yamabe Soliton}}
\textbf{\bf{Tokura, W. I.$^1$, Adriano, L. R.$^2$}, Pina R. S.$^3$.}\\
\textbf{\footnotesize \textit{ Instituto de Matemática e Estatística-UFG}}\\
\textbf{\footnotesize \textit{$^1$email:
[email protected]}} \textbf{\footnotesize
\textit{$^2$email: [email protected]}} \textbf{\footnotesize
\textit{$^3$email: [email protected]}}
{\Large}
\end{center}
\begin{abstract}
In this paper, we provide necessary and sufficient conditions for
the warped product $M=B\times_{f}F$ to be a gradient Yamabe soliton
when the base is conformal to an $n$-dimensional pseudo-Euclidean
space, invariant under the action of an
$(n-1)$-dimensional translation group, and the fiber $F$ is
scalar-constant. As an application, we obtain solutions in the steady case
with scalar-flat fiber. In addition, we consider potential functions
with separable variables on the warped product and obtain some
characterizations of the base and the fiber.
\end{abstract}
\section{Introduction and main statements}
A \textit{Yamabe soliton} is a pseudo-Riemannian manifold $(M,g)$
admitting a vector field \linebreak$X\in \mathfrak{X}(M)$ such that
\begin{equation}\label{eq:01}(S_{g}-\rho)g=\frac{1}{2}\mathfrak{L}_{X}g,
\end{equation}
where $S_{g}$ denotes the scalar curvature of $M$, $\rho$ is a real
number and $\mathfrak{L}_{X}g$ denotes the Lie derivative of the
metric $g$ with respect to $X$. We say that $(M^{n},g)$ is
shrinking, steady or expanding, if $\rho>0$, $\rho=0$ , $\rho<0$,
respectively. When $X=\nabla h$ for some smooth function $h\in
\mathbf{C^{\infty}}(M)$, we say that $(M^{n},g,\nabla h)$ is a
\textit{gradient Yamabe soliton} with potential function $h$. In
this case the equation \eqref{eq:01} becomes
\begin{equation}\label{eq8}(S_{g}-\rho)g=Hess(h),
\end{equation}
where $Hess(h)$ denotes the Hessian of $h$. When $h$ is constant, we
call it a \textit{trivial Yamabe soliton}.
Yamabe solitons are self-similar solutions for the Yamabe flow
$$
\frac{\partial}{\partial t}g(t)=-R_{g(t)}g(t),
$$
and are important to
understand the geometric flow since they can appear as singularity
models. It is known that every compact gradient Yamabe soliton
has constant scalar curvature and is hence trivial, since $h$ is
harmonic, see \cite{Cho}, \cite{Dask}, \cite{Hsu}. For the
non-compact case many interesting results have been obtained in \cite{Cao}, \cite{Cat}, \cite{Ma}, \cite{Ma1}, \cite{Wu}.
As pointed out in \cite{Cal} and \cite{Lopes}, it is important to
emphasize here that although the Yamabe flow is well posed in the
Riemannian setting, it does not necessarily exist in the
semi-Riemannian case, where even the existence of short-time
solutions is not guaranteed in general due to the lack of
parabolicity. However, the existence of self-similar solutions of
the flow is equivalent to the existence of Yamabe solitons as in
\eqref{eq:01}. Semi-Riemannian Yamabe solitons have been intensively
studied, showing many differences with respect to the Riemannian
case, see for instance \cite{Bat} and \cite{Cal}.
Brozos-Vázquez et al. in \cite{Bro} obtained a local characterization
of pseudo-Riemannian manifolds endowed with a gradient Yamabe soliton
metric; their results establish that a pseudo-Riemannian gradient
Yamabe soliton $(M,g)$, with potential function $h$ such that
\linebreak$|\nabla h|\neq0$, is locally isometric to a warped product with
one-dimensional base and scalar-constant fiber. In the Riemannian
context a global structure result was given in \cite{Cao}.
In \cite{Dask} Daskalopoulos and Sesum investigated gradient Yamabe
solitons and proved that all complete locally conformally flat
gradient Yamabe solitons with positive sectional curvature are
rotationally symmetric. Proceeding in the same locally conformally
flat context, Neto and Tenenblat in \cite{Bene} considered the
pseudo-Riemannian manifold
$(\mathbb{R}^{n},\frac{1}{\varphi^{2}}g_{0})$, where $g_{0}$ is the
canonical pseudo metric, and obtained a necessary and sufficient
condition for this manifold to be a gradient Yamabe soliton. In the
search for invariant solutions they considered the action of
an $(n-1)$-dimensional translation group and exhibited a complete
solution in the steady case.
Recently, Pina and De Sousa in \cite{Pina} considered
gradient Ricci solitons with warped product structure
$M=B^{n}\times_{f}F^{m}$, where the base is conformal to an $n$-dimensional
pseudo-Euclidean space, invariant under the action of an
$(n-1)$-dimensional translation group, the fiber is chosen to be an
Einstein manifold and the potential function $h$ depends only on the
base, and they gave a necessary and sufficient condition for $M$ to be a
gradient Ricci soliton.
As far as we know, there are no results for gradient Yamabe solitons
related to their potential functions on warped products of two
Riemannian manifolds of arbitrary dimensions. Thus, in this paper we
study gradient Yamabe solitons with warped product
structure, where we choose the fiber with dimension greater than
$1$. Initially we provide a sufficient condition for the potential
function on the warped product to depend only on the base.
\begin{proposition}\label{eq3}Let $M=B\times_{f}F$ be a warped product
manifold with metric $\tilde{g}$. If the metric
$\tilde{g}=g_{B}\oplus f^{2}g_{F}$ is a gradient Yamabe soliton with
potential function $h:M\rightarrow \mathbb{R}$ and there exists a
pair of orthogonal vector fields $(X_i,X_{j})$ on the base $B$ such that
$Hess_{g_{B}}(f)(X_{i},X_{j})\neq 0$, then the potential function
$h$ depends only on the base.
\end{proposition}
\begin{observation}
The previous proposition extends to Yamabe solitons the result obtained in \cite{Kim}, where the authors studied warped product gradient Ricci solitons with one-dimensional base.
\end{observation}
Motivated by the natural extension of Ricci solitons given by Rigoli,
Pigola and Setti in \cite{Pigola}, Barbosa and Ribeiro in
\cite{Barbosa} defined the concept of \textit{almost Yamabe soliton},
allowing the constant $\rho$ in the definition of Yamabe soliton
\eqref{eq:01} to be a differentiable function on $M$. The following
example was obtained by Barbosa and Ribeiro in \cite{Barbosa}, where
the manifold is endowed with a warped product metric.
\begin{example}Let $M^{n+1}=\mathbb{R}\times_{\cosh t}\mathbb{S}^{n}$
with metric $g=dt^{2}+\cosh^{2} t\, g_{0}$, where $g_{0}$ is the
canonical metric of $\mathbb{S}^{n}$. Take $(M^{n+1},g,\nabla h,\rho)$, where $h(t,x)=\sinh t$ and \newline$\rho(t,x)=\sinh t+n$. A
straightforward computation shows that
$M^{n+1}=\mathbb{R}\times_{\cosh t}\mathbb{S}^{n}$ is a noncompact almost gradient Yamabe soliton.
In what follows, inspired by Proposition \ref{eq3}, we consider a
warped product gradient Yamabe soliton $M=B\times_{f}F$ with
potential function $h$ splitting of the form
\begin{equation}
\label{h12}
h(x,y)=h_{1}(x)+h_{2}(y),\ \mbox{where}\ h_{1}\in\mathcal{C}^{\infty}(B)\
\mbox{and}\ h_{2}\in\mathcal{C}^{\infty}(F),
\end{equation}
and get the following characterization theorem.
\begin{theorem}
\label{theo1.1}Let $M=B\times_{f}F$ be a warped product manifold with metric $\tilde{g}=g_{B}\oplus
f^{2}g_{F}$ and gradient Yamabe soliton structure with potential
function $h:B\times F\rightarrow\mathbb{R}$ given by \eqref{h12}. Then one of the following
cases occurs:
\begin{description}
\item[(a)]$M$ is the Riemannian product between a \textbf{trivial gradient Yamabe soliton} and a \textbf{gradient Yamabe soliton}.
\item[(b)]$M$ is the Riemannian product between two \textbf{gradient Yamabe solitons}.
\item[(c)]$M$ is the warped product between an \textbf{almost gradient Yamabe soliton} and a \textbf{trivial gradient Yamabe soliton}.
\end{description}
\end{theorem}
This characterization theorem shows us that if we take the potential
function depending only on the base, then the fiber $F$ has
constant scalar curvature. In what follows we will take a warped
product gradient Yamabe soliton with potential function of the form
$h(x,y)=h_{1}(x)+constant$, the base conformal to an $n$-dimensional
pseudo-Euclidean space, and the fiber chosen to be a
scalar-constant space. More precisely, let $(\mathbb{R}^{n},g)$ be
the pseudo-Euclidean space, $n\geq3$ with coordinates
$x=(x_{1},\dots,x_{n})$ and $g_{ij}=\delta_{ij}\epsilon_{i}$ and let
$M=(\mathbb{R}^{n},\bar{g})\times_{f}F^{m}$ be a warped product
where $\bar{g}=\frac{1}{\varphi^{2}}g$, $F$ a semi-Riemannian
scalar-constant manifold with curvature $\lambda_{F}$, $m\geq1$,
$f$,$\varphi$, $h:\mathbb{R}^{n}\rightarrow\mathbb{R}$, smooth
functions, and $f$ is a positive function. Then we obtain necessary
and sufficient conditions for the warped product metric
$g_{B}\oplus f^{2}g_{F}$ to be a gradient Yamabe soliton.
\begin{theorem}\label{eq:02}Let $(\mathbb{R}^{n},g)$ be a
pseudo-Euclidean space, $n\geq3$ with coordinates \newline$x=(x_{1},\dots,x_{n})$ and $g_{ij}=\delta_{ij}\epsilon_{i}$, and
let $M=(\mathbb{R}^{n},\bar{g})\times_{f}F^{m}$ be a warped product
where $\bar{g}=\frac{1}{\varphi^{2}}g$, $F$ a semi-Riemannian
scalar-constant manifold with curvature $\lambda_{F}$, $m\geq1$,
$f$,$\varphi$, $h:\mathbb{R}^{n}\rightarrow\mathbb{R}$, smooth
functions, and $f$ is a positive function. Then the warped product
metric $\tilde{g}$ is a gradient Yamabe soliton with potential
function $h$ if, and only if, the functions $f, \varphi, h$
satisfy
\begin{equation}\label{eq:19}h_{,x_{i}x_{j}}+\frac{\varphi_{,x_{j}}}{\varphi}h_{,x_{i}}+\frac{\varphi_{,x_{i}}}{\varphi}h_{,x_{j}}=0\hspace{1cm}
i\neq j,
\end{equation}
\begin{equation}\label{eq:20}
\begin{split}
\Big{[}(n-1)\left(2\varphi\sum_{k}\varepsilon_{k}\varphi_{,x_{k}x_{k}}-n\sum_{k}\varepsilon_{k}\varphi_{,x_{k}}^{2}\right)+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}\left(\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}x_{k}}-(n-2)\varphi\sum_{k}\varepsilon_{k}\varphi_{,x_{k}}f_{,x_{k}}\right)+\\
-\frac{m(m-1)}{f^2}\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}}^{2}-\rho\Big{]}\frac{\varepsilon_{i}}{\varphi^{2}}=h_{,x_{i}x_{i}}+2\frac{\varphi_{,x_{i}}}{\varphi}h_{,x_{i}}-\varepsilon_{i}\sum_{k}\varepsilon_{k}\frac{\varphi_{,x_{k}}}{\varphi}h_{,x_{k}}\hspace{1cm}i=j,
\end{split}
\end{equation}
\begin{equation}\label{eq:21}
\begin{split}
(n-1)\left(2\varphi\sum_{k}\varepsilon_{k}\varphi_{,x_{k}x_{k}}-n\sum_{k}\varepsilon_{k}\varphi_{,x_{k}}^{2}\right)+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}\left(\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}x_{k}}-(n-2)\varphi\sum_{k}\varepsilon_{k}\varphi_{,x_{k}}f_{,x_{k}}\right)+\\
-\frac{m(m-1)}{f^2}\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}}^{2}-\rho=\frac{\varphi^{2}}{f}\sum_{k}\varepsilon_{k}f_{,x_{k}}h_{,x_{k}}.
\end{split}
\end{equation}
\end{theorem}
In order to obtain solutions of the equations in Theorem \ref{eq:02},
we consider $f$, $\varphi$ and $h$ invariant under the action of an
$(n-1)$-dimensional translation group, and let
$\xi=\sum_{i=1}^{n}\alpha_{i}x_{i}$, $\alpha_{i}\in\mathbb{R}$, be a
basic invariant for the $(n-1)$-dimensional translation group. Then
we obtain
\begin{theorem}\label{eq4}Let $(\mathbb{R}^{n},g)$ be a pseudo-Euclidean space, $n\geq3$ with coordinates \newline$x=(x_{1},\dots,x_{n})$, $g_{ij}=\delta_{ij}\epsilon_{i}$ and
let $M=(\mathbb{R}^{n},\bar{g})\times_{f}F^{m}$ be a warped product
where $\bar{g}=\frac{1}{\varphi^{2}}g$, $F$ a semi-Riemannian
scalar-constant manifold with curvature $\lambda_{F}$, $m\geq1$,
$f$,$\varphi$, $h:\mathbb{R}^{n}\rightarrow\mathbb{R}$, smooth
functions and $f>0$. Consider the functions $f(\xi)$,
$\varphi(\xi)$ and $h(\xi)$, where \newline$\xi=\sum_{k=1}^{n}\alpha_{k}x_{k},\alpha_{k}\in\mathbb{R}$ and
$\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}$ or
$\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=0$. Then the warped
product metric $\tilde{g}$ is a gradient Yamabe soliton with
potential function $h$ if, and only if, $f$, $h$ and $\varphi$,
satisfy
\begin{equation}\label{eq:09}
h''+2\frac{\varphi'h'}{\varphi}=0
\end{equation}
\begin{equation}\label{eq1}
\begin{split}
\varepsilon_{k_{0}}[(n-1)(2\varphi\varphi''-n(\varphi')^{2})-2\frac{m}{f}(\varphi^{2}f''-(n-2)\varphi\varphi'f')-&\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}+
\varphi'h'\varphi]\\&=\rho-\frac{\lambda_{F}}{f^{2}}
\end{split}
\end{equation}
\begin{equation}\label{eq2}
\begin{split}
\varepsilon_{k_{0}}[(n-1)(2\varphi\varphi''-n(\varphi')^{2})-2\frac{m}{f}(\varphi^{2}f''-(n-2)\varphi\varphi'f')-&\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}-
\frac{\varphi^{2}}{f}f'h']\\&=\rho-\frac{\lambda_{F}}{f^{2}}
\end{split}
\end{equation}
when
$\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}$.
And
\begin{equation}\label{eq:10}
h''+2\frac{\varphi'h'}{\varphi}=0
\end{equation}
\begin{equation}\label{eq10}
\rho-\frac{\lambda_{F}}{f^{2}}=0
\end{equation}
when $\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=0.$
\end{theorem}
It is interesting to know how the geometry of the fiber manifold $F$
affects the geometry of the warped product
$M=(\mathbb{R}^{n},\bar{g})\times_{f}F^{m}$. He in \cite{He0} has
shown that any complete steady gradient Yamabe soliton on
$\mathbb{R}\times_{f}F$ is necessarily isometric to the Riemannian
product with constant $f$ and $F$ of zero scalar curvature.
Moreover, he showed that there is no complete steady gradient Yamabe
soliton on $\mathbb{R}\times_{f}F^{n}$ with $n\geq2$ and $F$ a
manifold of positive constant scalar curvature.
As a consequence of Theorem \ref{eq4}, in the context of lightlike
vector invariance and scalar-constant fiber, we prove that if $F$
has positive constant scalar curvature then there is no expanding
or steady gradient Yamabe soliton
$M=(\mathbb{R}^{n},\bar{g})\times_{f}F^{m}$, and when $F$ has
negative constant scalar curvature there is no shrinking or steady
gradient Yamabe soliton \newline$M=(\mathbb{R}^{n},\bar{g})\times_{f}F^{m}$;
indeed, equation \eqref{eq10} forces $\rho$ and $\lambda_{F}$ to have the same sign.
This is translated into the following corollary.
\begin{cor}\label{cor1.7}In the context of Theorem \ref{eq4}, if $X=\sum_{k}\alpha_{k}\frac{\partial}{\partial x_{k}}$ is a lightlike vector and $\lambda_{F}>0$, then there is no expanding or
steady gradient Yamabe soliton with warped metric $\tilde{g}$ and
potential function $h$. Similarly, if we assume that $\lambda_{F}<0$, then there is no shrinking or
steady gradient Yamabe soliton with warped metric $\tilde{g}$ and
potential function $h$.
\end{cor}
Now, by equations \eqref{eq:09} and \eqref{eq:10} in Theorem \ref{eq4}, we easily see that a necessary condition for
$M=(\mathbb{R}^{n},\overline{g})\times_{f}F^{m}$ to be a gradient
Yamabe soliton with invariant solution $f(\xi)$,
$\varphi(\xi)$ and $h(\xi)$, where
$\xi=\sum_{k=1}^{n}\alpha_{k}x_{k},\alpha_{k}\in\mathbb{R}$, is that
$h$ is a monotone function. That is,
\begin{equation}h'(\xi)=\frac{\alpha}{\varphi^{2}(\xi)},\nonumber
\end{equation}
for some $\alpha\in\mathbb{R}$.
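A short way to see this (a sketch, assuming the functions are differentiable in $\xi$): multiplying \eqref{eq:09} (equivalently \eqref{eq:10}) by $\varphi^{2}$ gives
$$\varphi^{2}h''+2\varphi\varphi'h'=\left(\varphi^{2}h'\right)'=0,$$
so $\varphi^{2}h'$ equals some constant $\alpha$, which is the expression above; in particular $h'$ has a fixed sign.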
We provide solutions of the ODEs in Theorem \ref{eq4} in two cases,
$h'=0$ and $h'\neq0$, with the metric $\tilde{g}=g_{B}\oplus f^{2}g_{F}$
a steady gradient Yamabe soliton, i.e. $\rho=0$, and $F$ a
scalar-flat pseudo-Riemannian manifold.
\begin{theorem}\label{eq5}In the context of Theorem \ref{eq4}, if $\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}\neq 0$ and the fiber $F$ is scalar-flat, then the warped product metric $\tilde{g}$ is a steady gradient
Yamabe soliton with potential function $h$ and $h'\neq0$ if, and
only if, $f$, $h$ and $\varphi$ satisfy
\begin{equation}\label{eqa}f(\xi)=\frac{e^{c}}{\varphi(\xi)},
\end{equation}
\begin{equation}\label{eqb}h(\xi)=\alpha\int\frac{1}{\varphi^{2}(\xi)}d\xi,
\end{equation}
\begin{equation}\label{eqc}(n+m-1)(n+m+2)\int\frac{\varphi d\varphi}{\alpha-\frac{2\beta(n+m-1)(n+m+2)}{n+m-2}\varphi^{\frac{n+m}{2}+1}}=\xi+\nu,\hspace{0,2cm}c\in\mathbb{R}
\end{equation}
where $c$, $\nu$, $\alpha$, $\beta\in \mathbb{R}$ with
$\alpha\neq0$.
\end{theorem}
\begin{theorem}\label{eq7} In the context of Theorem \ref{eq4}, if $\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}\neq 0$ and the fiber $F$ is scalar-flat, then, given a smooth function $\varphi>0$, the warped product metric
$\tilde{g}$ is a steady gradient Yamabe soliton with potential
function $h$ and $h'=0$ if, and only if, $f$ and $h$ satisfy
\begin{equation}\label{eq23}h(\xi)=\text{constant},
\end{equation}
\begin{equation}\label{eq22}f=\varphi^{\frac{n-2}{m+1}}e^{\int z_{p}d\xi}\left(\int e^{-(m+1)\int z_{p}d\xi}d\xi+\frac{2}{m+1}C\right)^{\frac{2}{m+1}}
\end{equation}
for an appropriate function $z_{p}$.
\end{theorem}
In the null case $\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=0$ we
obtain
\begin{theorem}\label{eq6} In the context of Theorem \ref{eq4}, if $\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=0$ and the fiber $F$ is scalar-flat, then, given smooth
functions $\varphi(\xi)$ and $f(\xi)$, the warped product metric
$\tilde{g}$ is a steady gradient Yamabe soliton with potential
function $h$ if, and only if,
\begin{equation}h(\xi)=\alpha\int\frac{1}{\varphi^{2}(\xi)}d\xi.\nonumber
\end{equation}
\end{theorem}
\begin{observation}As pointed out in \cite{Chenn}, see Theorem 3.6, a necessary and
sufficient condition for the warped product $B\times_{f}F$ to be
conformally flat is that the function $f$ defines a global conformal
deformation such that $(B,\frac{1}{f^{2}}g_{B})$ is a space of
constant curvature $c$ and $F$ has constant curvature $-c$. With
this observation, we see that the solutions of Theorems \ref{eq5},
\ref{eq7} and \ref{eq6} define a non locally conformally flat
metric if the warping function $f$ is not constant.
\end{observation}
\begin{observation}As we can see in the proof of Theorem \ref{eq4}, if $\rho$ is a function defined only on the
base, then we can easily extend Theorem \ref{eq4} to the context of
almost gradient Yamabe solitons. In the particular case of lightlike
vectors there are infinitely many solutions, that is, given
$\varphi$ and $f$,
\begin{equation}\rho(\xi)=\frac{\lambda_{F}}{f(\xi)^{2}}\nonumber
\end{equation}
\begin{equation}h(\xi)=\alpha\int\frac{1}{\varphi^{2}(\xi)}d\xi\nonumber
\end{equation}
provide a family of almost gradient Yamabe solitons with warped
product structure.
\end{observation}
Before proving our main results, we present some examples
illustrating the above theorems.
\begin{example}In Theorem \ref{eq5}, consider $\beta=0$, then we
have
\begin{equation}f(\xi)=\frac{e^{c}\sqrt{(n-1)(n+m+2)}}{\sqrt{2\alpha(\xi+\nu)}},\nonumber
\end{equation}
\begin{equation}h(\xi)=\frac{\alpha}{2}(n-1)(n+m+2)\ln|\xi+\nu|\nonumber
\end{equation}
\begin{equation}\varphi(\xi)=\sqrt{\frac{2\alpha(\xi+\nu)}{(n-1)(n+m+2)}}\nonumber
\end{equation}
where $\alpha(\xi+\nu)>0$ and $c\in\mathbb{R}$. Thus, the metric
$\tilde{g}=\frac{1}{\varphi^{2}}g_{0}\oplus f^{2}g_{F}$ is a steady
gradient Yamabe soliton defined on a half-space of the Euclidean space
$\mathbb{R}^{n}$ with potential function $h$.
\end{example}
\begin{example}In Theorem \ref{eq7} consider the warped product $M=(\mathbb{R}^{2},\bar{g})\times_{f}F^{3}$. Given the function $\varphi(\xi)=e^{\frac{3\xi^{2}}{4}}$,
we have that
\begin{equation}\bar{g}=e^{-\frac{3\xi^{2}}{2}}g_0, \hspace{0,3cm} h(\xi)=\text{constant},\hspace{0,3cm}f(\xi)=e^{\frac{\xi}{2}}\nonumber
\end{equation}
defines a steady gradient Yamabe soliton with warped product metric.
\end{example}
\begin{example}In Theorem \ref{eq6} consider the Lorentzian space $(\mathbb{R}^{n},g)$ with coordinates $(x_{1},\dots,x_{n})$
and signature $\epsilon_{1}=-1$, $\epsilon_{k}=1$ for all $k\geq2$,
and $F^{m}$ a complete scalar flat manifold. Let $\xi=x_{1}+x_{2}$
and choose $\varphi(\xi)=\frac{1}{1+\xi^{2}}$. Then, for
$\alpha\neq0$
\begin{equation}\bar{g}=(1+\xi^{2})^{2}g, \hspace{0,3cm} h(\xi)=\alpha\left(\xi+\frac{2}{3}\xi^{3}+\frac{1}{5}\xi^{5}\right),\hspace{0,3cm}f\in\mathcal{C}^{\infty}\nonumber
\end{equation}
defines a steady gradient Yamabe soliton
$(\mathbb{R}^{n},\bar{g})\times_{f}(F^{m},g_{flat})$ with potential
function $h$ and warping function $f$. Observe that, since the
conformal function $\varphi$ is bounded, we have that $\bar{g}$ is
complete, and consequently
$\tilde{g}=\frac{1}{\varphi^{2}}g_{0}\oplus f^{2}g_{F}$ is complete.
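A quick check of the potential function above (only the integration, with $\varphi(\xi)=\frac{1}{1+\xi^{2}}$):
$$h(\xi)=\alpha\int\frac{d\xi}{\varphi^{2}(\xi)}=\alpha\int(1+2\xi^{2}+\xi^{4})\,d\xi=\alpha\left(\xi+\frac{2}{3}\xi^{3}+\frac{1}{5}\xi^{5}\right),$$
up to an additive constant, in agreement with Theorem \ref{eq6}.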
\end{example}
\begin{section}{Proofs of the Main Results}
\begin{myproof}[\textbf{Proof of Proposition 1.1}]Let $M=B\times_{f}F$ be a gradient Yamabe soliton with potential function $h:M\rightarrow\mathbb{R}$,
then by equation \eqref{eq8}, we obtain
\begin{equation}\label{eq9}(S_{\tilde{g}}-\rho)\tilde{g}=Hess(h)
\end{equation}
Now, it is well known that for the warped metric $\tilde{g}$ the scalar
curvatures of the base $B$, the fiber $F$ and $M$ are related by (see Chapter 7
of \cite{oneil})
\begin{equation}S_{\tilde{g}}=S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle grad_{B}f,grad_{B}f\rangle}{f^{2}}\label{eq:04}
\end{equation}
where $\Delta^{B}$ denotes the Laplacian on $B$. Then,
considering $X_{1},X_{2},\dots,X_{n}\in \mathcal{L}(B)$ and
$Y_{1},Y_{2},\dots,Y_{m}\in \mathcal{L}(F)$, where $\mathcal{L}(B)$
and $\mathcal{L}(F)$ are respectively the spaces of lifts of vector
fields on $B$ and $F$ to $B\times F$, substituting equation
\eqref{eq:04} into \eqref{eq9} we obtain that
\begin{equation*}
\begin{cases}
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho\Big{)}g_{B}(X_{i},X_{j})=Hess_{\tilde{g}}h(X_{i},X_{j})&(i)\\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho\Big{)}\tilde{g}(X_{i},Y_{j})=Hess_{\tilde{g}}h(X_{i},Y_{j})&(ii) \\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho\Big{)}f^{2}g_{F}(Y_{i},Y_{j})=Hess_{\tilde{g}}h(Y_{i},Y_{j}).&(iii)
\end{cases}
\end{equation*}
Thus, using the fact $\tilde{g}(X_{i},Y_{j})=0$, we obtain by expression $(ii)$ that
\begin{equation}Hess(h)(X_{i},Y_{j})=0.\nonumber
\end{equation}
Now, using Lemma 2.1 of \cite{He}, we obtain
\begin{equation}\label{eqe}h(x,y)=z(x)+f(x)v(y)
\end{equation}
where $z:B\rightarrow\mathbb{R}$ and $v:F\rightarrow\mathbb{R}$.
Then since
$$Hess_{\tilde{g}}h(X_{i},X_{j})=Hess_{g_{B}}h(X_{i},X_{j}),$$
we have by expression $(i)$ that
\begin{equation}\label{eqd}\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho\Big{)}g_{B}(X_{i},X_{j})=Hess_{g_{B}}h(X_{i},X_{j}).
\end{equation}
We have from \eqref{eqe} that the right-hand side of \eqref{eqd} is given by
\begin{equation}\label{eqf}Hess_{g_{B}}z+vHess_{g_{B}}f.
\end{equation}
By hypothesis, there are two orthogonal vector fields $X_{i}$, $X_{j}$ such that $Hess_{g_{B}}f(X_{i},X_{j})\neq0$. Then, combining this fact with equations \eqref{eqd} and \eqref{eqf}, we obtain
\begin{equation}v=-\frac{Hess_{g_{B}}z(X_{i},X_{j})}{Hess_{g_{B}}f(X_{i},X_{j})}.
\end{equation}
This shows that $v(y)$ is constant, and then by expression \eqref{eqe} we have that $h$ depends only on the base.
\end{myproof}
\begin{myproof}[\textbf{Proof of Theorem \ref{theo1.1}}]Let $M=B\times_{f}F$ be a warped product with gradient Yamabe soliton structure and potential function
$h(x,y)=h_{1}(x)+h_{2}(y)$. In the same way as in the proof of Proposition \ref{eq3},
for $X_{1},X_{2},\dots,X_{n}\in \mathcal{L}(B)$ and
$Y_{1},Y_{2},\dots,Y_{m}\in \mathcal{L}(F)$ we obtain
\begin{equation}Hess(h)(X_{i},Y_{j})=0.\nonumber
\end{equation}
As we know, the connection of a warped product is particularly simple; namely, for \newline$X\in \mathcal{L}(B)$ and $Y\in \mathcal{L}(F)$, we have
$$\nabla_{X}Y=\nabla_{Y}X=\frac{X(f)}{f}Y.$$
Thus,
\begin{equation}Hess(h)(X_{i},Y_{j})=X_{i}(Y_{j}(h))-(\nabla_{X_{i}}Y_{j})h=X_{i}(Y_{j}(h))-\frac{X_{i}(f)}{f}Y_{j}(h)=0.\nonumber
\end{equation}
Establishing the notation $h_{,x_{i}}=X_{i}(h)$,
$h_{,x_{i}x_{j}}=X_{j}(X_{i}(h))$, we have that
\begin{equation}h_{,y_{j}x_{i}}-\frac{f_{,x_{i}}}{f}h_{,y_{j}}=0-\frac{f_{,x_{i}}}{f}(h_{2})_{,y_{j}}=0\hspace{0,4cm}\forall i,j.\nonumber
\end{equation}
Then, $f$ is constant or $h(x,y)=h_{1}(x)+constant$. We separate the
proof in three cases:
$Case (I):$ ($f$ is constant and $h(x,y)=h_{1}(x)+constant$). In
this case, $M=B\times_{f}F$ is a Riemannian product and we have
\begin{equation*}
\begin{cases}
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\rho\Big{)}g_{B}(X_{i},X_{j})=Hess_{g_{B}}h_{1}(X_{i},X_{j})&(i)\\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\rho\Big{)}\tilde{g}(X_{i},Y_{j})=Hess_{\tilde{g}}h(X_{i},Y_{j})=0&(ii) \\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\rho\Big{)}f^{2}g_{F}(Y_{i},Y_{j})=Hess_{g_{F}}h_{2}(Y_{i},Y_{j})+f\nabla f(h_{1})g_{F}(Y_{i},Y_{j})&(iii)
\end{cases}
\end{equation*}
where in equation $(iii)$ we use Proposition 35 of \cite{oneil} and
the Hessian definition to get
\begin{eqnarray}\label{eq13}
Hess_{\tilde{g}}h(Y_{i},Y_{j})& = &
Y_{i}(Y_{j}(h))-(\nabla_{Y_{i}}Y_{j})^{M}h\\
&=& Y_{i}(Y_{j}(h))-(\mathcal{H}(\nabla_{Y_{i}}Y_{j})+\mathcal{V}(\nabla_{Y_{i}}Y_{j}))(h)\nonumber\\
&=& Y_{i}(Y_{j}(h))+\frac{\langle Y_{i},
Y_{j}\rangle}{f}grad_{\tilde{g}}f(h)-\nabla_{Y_{i}}^{F}Y_{j}(h)\nonumber\\
&=& Y_{i}(Y_{j}(h))+fg_{F}(Y_{i},
Y_{j})grad_{\tilde{g}}f(h)-\nabla_{Y_{i}}^{F}Y_{j}(h)\nonumber\\
&=&Hess_{g_{F}}h_{2}(Y_{i},Y_{j})+f\nabla
f(h_{1})g_{F}(Y_{i},Y_{j}).\nonumber
\end{eqnarray}
Since $S_{g_{F}}$ is constant on $B$, we have from $(i)$ that $B$
is a gradient Yamabe soliton of the form $(B,g_{B},\nabla
h_{1},-\frac{S_{g_{F}}}{f^{2}}+\rho)$. Furthermore, since
$h(x,y)=h_{1}(x)+constant$ we have by $(iii)$ that $F$ is a trivial
gradient Yamabe soliton of the form $(F,g_{F},\nabla
0,f^{2}\rho-f^{2}S_{g_{B}})$. This proves the item $(a)$.
$Case (II):$ ($f$ is constant and $h(x,y)=h_{1}(x)+h_{2}$, $h_{2}$
not necessarily constant). In this case, $M=B\times_{f}F$ is a
Riemannian product and we have
\begin{equation*}
\begin{cases}
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\rho\Big{)}g_{B}(X_{i},X_{j})=Hess_{g_{B}}h_{1}(X_{i},X_{j})&(i)\\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\rho\Big{)}\tilde{g}(X_{i},Y_{j})=Hess_{\tilde{g}}h(X_{i},Y_{j})=0&(ii) \\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\rho\Big{)}f^{2}g_{F}(Y_{i},Y_{j})=Hess_{g_{F}}h_{2}(Y_{i},Y_{j})+f\nabla f(h_{1})g_{F}(Y_{i},Y_{j}).&(iii)
\end{cases}
\end{equation*}
Since $S_{g_{F}}$ is constant on $B$, we have that $B$ is a gradient
Yamabe soliton of the form $(B,g_{B},\nabla
h_{1},-\frac{S_{g_{F}}}{f^{2}}+\rho)$. Furthermore, by equation
$(iii)$ we have that $F$ is a gradient Yamabe soliton of the form
$(F,g_{F},\nabla h_{2},f^{2}\rho-f^{2}S_{g_{B}})$. This proves the
item $(b)$.
$Case (III):$ ($f$ is non constant and $h(x,y)=h_{1}(x)+constant$).
In this case we have:
\begin{equation*}
\begin{cases}
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho\Big{)}g_{B}(X_{i},X_{j})=Hess_{g_{B}}h_{1}(X_{i},X_{j})&(i)\\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho\Big{)}\tilde{g}(X_{i},Y_{j})=Hess_{\tilde{g}}h(X_{i},Y_{j})=0&(ii) \\
\Big{(}S_{g_{B}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho\Big{)}f^{2}g_{F}(Y_{i},Y_{j})=f\nabla
f(h)g_{F}(Y_{i},Y_{j})&(iii)
\end{cases}
\end{equation*}
Since $f>0$, by equation $(iii)$ we have that
\begin{equation} (S_{g_{F}}-\psi) g_{F}(Y_{i},Y_{j})=0
\end{equation}
where $\psi=-f^{2}S_{g_{B}}+2fd\Delta^{B}f+d(d-1)\langle
grad_{B}f,grad_{B}f\rangle+f^{2}\rho+f\nabla f(h)$.
Now, since $\psi$ depends only on $B$, we have that $\psi$ is
constant on $F$; then, by equation $(iii)$, we have that $F$ is a
trivial gradient Yamabe soliton. Furthermore, by equation $(i)$ we
have that $(B,g_{B})$ is a gradient almost Yamabe soliton of the
form
$$
(B,g_{B},\nabla
h_{1},-[\frac{S_{g_{F}}}{f^{2}}-\frac{2d}{f}\Delta^{B}f-d(d-1)\frac{\langle
grad_{B}f,grad_{B}f\rangle}{f^{2}}-\rho] ).
$$
This proves the item $(c)$.
\end{myproof}
\begin{myproof}[\textbf{Proof of Theorem \ref{eq:02}}]
Let $M$ be a warped product with gradient Yamabe soliton structure
and potential function $h$, that is,
\begin{equation}(S_{\tilde{g}}-\rho)\tilde{g}=Hess_{\tilde{g}}(h).\label{eq:200}
\end{equation}
By the same arguments used in proof of Proposition \ref{eq3}, for
$X_{1},X_{2},\dots,X_{n}\in \mathcal{L}(B)$ and
$Y_{1},Y_{2},\dots,Y_{m}\in \mathcal{L}(F)$ we obtain
\begin{equation*}
\begin{cases}
\Big{(}S_{\overline{g}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2m}{f}\Delta_{\overline{g}}f-m(m-1)\frac{\langle
grad_{\overline{g}}f,grad_{\overline{g}}f\rangle}{f^{2}}-\rho\Big{)}\overline{g}(X_{i},X_{j})=Hess_{\tilde{g}}h(X_{i},X_{j})&(i)\\
\Big{(}S_{\overline{g}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2m}{f}\Delta_{\overline{g}}f-m(m-1)\frac{\langle
grad_{\overline{g}}f,grad_{\overline{g}}f\rangle}{f^{2}}-\rho\Big{)}\tilde{g}(X_{i},Y_{j})=Hess_{\tilde{g}}h(X_{i},Y_{j})=0&(ii) \\
\Big{(}S_{\overline{g}}+\frac{S_{g_{F}}}{f^{2}}-\frac{2m}{f}\Delta_{\overline{g}}f-m(m-1)\frac{\langle
grad_{\overline{g}}f,grad_{\overline{g}}f\rangle}{f^{2}}-\rho\Big{)}f^{2}g_{F}(Y_{i},Y_{j})=Hess_{\tilde{g}}h(Y_{i},Y_{j}).&(iii)
\end{cases}
\end{equation*}
It is well known that for the conformal metric
$\bar{g}=\frac{1}{\varphi^{2}}g_{0}$, the Christoffel symbols are given
by
\begin{equation}\bar{\Gamma}_{ij}^{k}=0,\ \bar{\Gamma}_{ij}^{i}=-\frac{\varphi_{,x_{j}}}{\varphi},\ \bar{\Gamma}_{ii}^{k}=\epsilon_{i}\epsilon_{k}\frac{\varphi_{,x_{k}}}{\varphi}\;\ \mbox{and}\;\ \bar{\Gamma}_{ii}^{i}=-\frac{\varphi_{,x_{i}}}{\varphi}.\nonumber
\end{equation}
Then, by the definition of the Hessian, we obtain that
\begin{equation}\label{eq:22}
\begin{cases}
Hess_{\overline{g}}(h)_{ij}=h_{,x_{i}x_{j}}+\frac{\varphi_{,x_{i}}h_{,x_{j}}}{\varphi}+\frac{\varphi_{,x_{j}}h_{,x_{i}}}{\varphi}& i\neq j\\
Hess_{\overline{g}}(h)_{ii}=h_{,x_{i}x_{i}}+2\frac{\varphi_{,x_{i}}h_{,x_{i}}}{\varphi}-\varepsilon_{i}\sum_{k=1}^{n}\varepsilon_{k}\frac{\varphi_{,x_{k}}}{\varphi}h_{,x_{k}}&i=j.
\end{cases}
\end{equation}
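As a brief verification of the first formula above (the off-diagonal case), recall that $Hess_{\overline{g}}(h)_{ij}=h_{,x_{i}x_{j}}-\bar{\Gamma}_{ij}^{k}h_{,x_{k}}$; since $\bar{\Gamma}_{ij}^{k}=0$ for $k\notin\{i,j\}$, the Christoffel symbols above give, for $i\neq j$,
$$Hess_{\overline{g}}(h)_{ij}=h_{,x_{i}x_{j}}-\bar{\Gamma}_{ij}^{i}h_{,x_{i}}-\bar{\Gamma}_{ij}^{j}h_{,x_{j}}=h_{,x_{i}x_{j}}+\frac{\varphi_{,x_{j}}}{\varphi}h_{,x_{i}}+\frac{\varphi_{,x_{i}}}{\varphi}h_{,x_{j}}.$$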
The Ricci curvature is given by
$$Ric_{\overline{g}}=\frac{1}{\varphi^{2}}\Big{\{}(n-2)\varphi
Hess_{g}(\varphi)+[\varphi\Delta_{g}\varphi-(n-1)|\nabla_{g}\varphi|^{2}]g\Big{\}}$$
and then we easily see that the scalar curvature of the conformal metric
is given by
\begin{equation}\label{eq12}S_{\overline{g}}=(n-1)(2\varphi\Delta_{g}\varphi-n|\nabla_{g}\varphi|^{2})=(n-1)(2\varphi\sum_{k=1}^{n}\varepsilon_{k}\varphi_{,x_{k}x_{k}}-n\sum_{k=1}^{n}\varepsilon_{k}\varphi_{,x_{k}}^{2}).
\end{equation}
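Indeed, a brief verification: tracing the Ricci expression above with $\overline{g}^{ij}=\varphi^{2}g^{ij}$ (here $g$ is flat, so $Ric_{g}=0$),
$$S_{\overline{g}}=\overline{g}^{ij}(Ric_{\overline{g}})_{ij}=(n-2)\varphi\Delta_{g}\varphi+n\left[\varphi\Delta_{g}\varphi-(n-1)|\nabla_{g}\varphi|^{2}\right]=(2n-2)\varphi\Delta_{g}\varphi-n(n-1)|\nabla_{g}\varphi|^{2},$$
which is exactly $(n-1)(2\varphi\Delta_{g}\varphi-n|\nabla_{g}\varphi|^{2})$, as stated in \eqref{eq12}.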
Since $h:\mathbb{R}^{n}\rightarrow\mathbb{R}$, we obtain
\begin{equation}\label{eq:24}
Hess_{\tilde{g}}h(X_{i},X_{j})=Hess_{\overline{g}}h(X_{i},X_{j}),
\hspace{0,4cm} \forall i,j.
\end{equation}
On the other hand
\begin{equation}\label{eq:23}
\begin{cases}
S_{F}g_{F}=\lambda_{F}g_{F}\\
\tilde{g}(Y_{i},Y_{j})=f^{2}g_{F}(Y_{i},Y_{j})\\
\Delta_{\overline{g}}f=\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}x_{k}}-(n-2)\varphi\sum_{k}\varepsilon_{k}\varphi_{,x_{k}}f_{,x_{k}}\\
\tilde{g}(grad_{\overline{g}}f,grad_{\overline{g}}f)=\overline{g}(grad_{\overline{g}}f,grad_{\overline{g}}f)=\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}}^{2}.
\end{cases}
\end{equation}
Now, substituting the second equation of \eqref{eq:22}, the
equations of \eqref{eq:23} and equation \eqref{eq12} into $(i)$, we
have
\begin{equation*}
\begin{split}
\Big{[}(n-1)(2\varphi\sum_{k}\varepsilon_{k}\varphi_{,x_{k}x_{k}}-n\sum_{k}\varepsilon_{k}\varphi_{,x_{k}}^{2})+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}(\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}x_{k}}-(n-2)\varphi\sum_{k}\varepsilon_{k}\varphi_{,x_{k}}f_{,x_{k}})+\\
-\frac{m(m-1)}{f^2}\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}}^{2}-\rho\Big{]}\frac{\varepsilon_{i}}{\varphi^{2}}=h_{,x_{i}x_{i}}+2\frac{\varphi_{,x_{i}}}{\varphi}h_{,x_{i}}-\varepsilon_{i}\sum_{k}\varepsilon_{k}\frac{\varphi_{,x_{k}}}{\varphi}h_{,x_{k}}
\end{split}
\end{equation*}
which is equation \eqref{eq:20}.
Analogously, substituting the first equation of \eqref{eq:22} and
the equations of \eqref{eq:23} into $(i)$, we obtain
\begin{equation}h_{,x_{i}x_{j}}+\frac{\varphi_{,x_{j}}}{\varphi}h_{,x_{i}}+\frac{\varphi_{,x_{i}}}{\varphi}h_{,x_{j}}=0
\end{equation}
which is equation \eqref{eq:19}.
In the same way as for equation \eqref{eq13}, we have that
\begin{eqnarray}\label{eq:25}
Hess_{\tilde{g}}h(Y_{i},Y_{j})& = &
Y_{i}(Y_{j}(h))-(\nabla_{Y_{i}}Y_{j})^{M}h\nonumber\\
&=&
Hess_{g_{F}}h(Y_{i},Y_{j})+fg_{F}(Y_{i},
Y_{j})grad_{\tilde{g}}f(h)\nonumber\\
&=& fg_{F}(Y_{i},Y_{j})grad_{\tilde{g}}f(h)\nonumber\\
&=&f\varphi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}}h_{,x_{k}}g_{F}(Y_{i},Y_{j}).
\end{eqnarray}
Here $Hess_{g_{F}}h(Y_{i},Y_{j})=0$ since $h$ depends only on the base variables.
Then, substituting equations \eqref{eq:25}, \eqref{eq:23} and
\eqref{eq12} into $(iii)$, we obtain equation \eqref{eq:21}.
A direct calculation shows us the converse implication. This
concludes the proof of Theorem \ref{eq:02}.
\end{myproof}
\begin{myproof}[\textbf{Proof of Theorem \ref{eq4}}]Since we are assuming that $\varphi(\xi)$, $h(\xi)$
and $f(\xi)$ are functions of $\xi$, where
$\xi=\sum_{k}\alpha_{k}x_{k}$, $\alpha_{k}\in\mathbb{R}$, and either
$\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}$ or
$\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=0$, we have
\begin{equation*}\varphi_{,x_{i}}=\varphi'\alpha_{i};\hspace{0,1cm}
\varphi_{,x_{i}x_{j}}=\varphi''\alpha_{i}\alpha_{j};\hspace{0,1cm}f_{,x_{i}}=f'\alpha_{i};\hspace{0,1cm}
f_{,x_{i}x_{j}}=f''\alpha_{i}\alpha_{j};\hspace{0,1cm}
h_{,x_{i}}=h'\alpha_{i};\hspace{0,1cm}
h_{,x_{i}x_{j}}=h''\alpha_{i}\alpha_{j}.
\end{equation*}
Substituting these expressions into \eqref{eq:19} of Theorem \ref{eq:02}, we obtain
\begin{equation}\label{eq:33}\Big{(}h''+2\frac{h'\varphi'}{\varphi}\Big{)}\alpha_{i}\alpha_{j}=0,
\hspace{0,4cm} \forall i\neq j.
\end{equation}
Similarly, considering equations \eqref{eq:20} and \eqref{eq:21} of Theorem \ref{eq:02},
we obtain
\begin{equation}\label{eq:30}
\begin{split}
\Big{[}(n-1)(2\varphi\varphi''\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-n(\varphi')^{2}\sum_{k}\varepsilon_{k}\alpha_{k}^{2})+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}(\varphi^{2}f''\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-(n-2)\varphi\varphi'f'\sum_{k}\varepsilon_{k}\alpha_{k}^{2})+\\
-\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-\rho\Big{]}\frac{\varepsilon_{i}}{\varphi^{2}}=h''\alpha_{i}^{2}+2\alpha_{i}^{2}\frac{\varphi'}{\varphi}h'-\varepsilon_{i}h'\frac{\varphi'}{\varphi}\sum_{k}\varepsilon_{k}\alpha_{k}^{2}
\end{split}
\end{equation}
for $i\in\{1,2,\dots,n\}$, and
\begin{equation}\label{eq:31}
\begin{split}
(n-1)(2\varphi\varphi''\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-n(\varphi')^{2}\sum_{k}\varepsilon_{k}\alpha_{k}^{2})+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}(\varphi^{2}f''\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-(n-2)\varphi\varphi'f'\sum_{k}\varepsilon_{k}\alpha_{k}^{2})+\\
-\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-\rho=\frac{\varphi^{2}}{f}f'h'\sum_{k}\varepsilon_{k}\alpha_{k}^{2}.
\end{split}
\end{equation}
If there exist $i\neq j$ such that $\alpha_{i}\alpha_{j}\neq
0$, then equation \eqref{eq:33} gives
\begin{equation}\label{eq:34}\left(h''+2\frac{h'\varphi'}{\varphi}\right)=0.
\end{equation}
It follows from \eqref{eq:34} that equation \eqref{eq:30}
reduces to
\begin{equation}
\begin{split}
\Big{[}(n-1)(2\varphi\varphi''\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-n(\varphi')^{2}\sum_{k}\varepsilon_{k}\alpha_{k}^{2})+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}(\varphi^{2}f''\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-(n-2)\varphi\varphi'f'\sum_{k}\varepsilon_{k}\alpha_{k}^{2})+\\
-\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}\sum_{k}\varepsilon_{k}\alpha_{k}^{2}-\rho\Big{]}\frac{\varepsilon_{i}}{\varphi^{2}}=-\varepsilon_{i}h'\frac{\varphi'}{\varphi}\sum_{k}\varepsilon_{k}\alpha_{k}^{2}.
\end{split}
\end{equation}
Thus, substituting
$\sum_{k}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}$, we
obtain equation \eqref{eq1}.
In the same way, substituting
$\sum_{k}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}$ in
\eqref{eq:31}, we obtain equation \eqref{eq2}.
Thus, if
$\sum_{k}\varepsilon_{k}\alpha_{k}^{2}=\varepsilon_{k_{0}}$, then we
obtain equations \eqref{eq:09}, \eqref{eq1} and \eqref{eq2}. In the
case $\sum_{k}\varepsilon_{k}\alpha_{k}^{2}=0$, we easily see that
equation \eqref{eq:09} reduces to
\begin{equation}
\begin{cases}
h''+2\frac{\varphi'h'}{\varphi}=0 \\
\rho-\frac{\lambda_{F}}{f^{2}}=0.\\
\end{cases}
\end{equation}
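Indeed, if $\sum_{k}\varepsilon_{k}\alpha_{k}^{2}=0$, then every term carrying this factor drops out: equation \eqref{eq:31} becomes $\frac{\lambda_{F}}{f^{2}}-\rho=0$, while equation \eqref{eq:30} becomes
\begin{equation*}
\Big(\frac{\lambda_{F}}{f^{2}}-\rho\Big)\frac{\varepsilon_{i}}{\varphi^{2}}=\Big(h''+2\frac{\varphi'}{\varphi}h'\Big)\alpha_{i}^{2},
\end{equation*}
and, since $\alpha_{i}\neq0$ for at least one $i$, the first equation of the system follows.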
Now, we need to consider the case $\alpha_{k_{0}}=1$ and
$\alpha_{k}=0$ $\forall k\neq k_{0}$. In this case, equation
\eqref{eq:33} is trivially satisfied, and since equation
\eqref{eq:31} does not depend on the index $i$, we have that
equation \eqref{eq:31} is equivalent to equation $\eqref{eq2}$.
Finally, we need to show the validity of equations \eqref{eq:09} and
\eqref{eq1}. Observe that taking $i=k_{0}$, that is,
$\alpha_{k_{0}}=1$, in \eqref{eq:30}, we get
\begin{equation}
\begin{split}
\Big{[}(n-1)(2\varphi\varphi''\varepsilon_{k_{0}}-n(\varphi')^{2}\varepsilon_{k_{0}})+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}(\varphi^{2}f''\varepsilon_{k_{0}}-(n-2)\varphi\varphi'f'\varepsilon_{k_{0}})+\\
-\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}\varepsilon_{k_{0}}-\rho\Big{]}\frac{\varepsilon_{k_{0}}}{\varphi^{2}}=h''+2\frac{\varphi'}{\varphi}h'-h'\frac{\varphi'}{\varphi}=h''+\frac{\varphi'}{\varphi}h'
\end{split}
\end{equation}
and for $i\neq k_{0}$, that is, $\alpha_{i}=0$, we have
\begin{equation}
\begin{split}
\Big{[}(n-1)(2\varphi\varphi''\varepsilon_{k_{0}}-n(\varphi')^{2}\varepsilon_{k_{0}})+\frac{\lambda_{F}}{f^{2}}-\frac{2m}{f}(\varphi^{2}f''\varepsilon_{k_{0}}-(n-2)\varphi\varphi'f'\varepsilon_{k_{0}})+\\
-\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}\varepsilon_{k_{0}}-\rho\Big{]}\frac{\varepsilon_{i}}{\varphi^{2}}=-\varepsilon_{i}\varepsilon_{k_{0}}\frac{\varphi'}{\varphi}h'.
\end{split}
\end{equation}
However, these equations are equivalent to equations \eqref{eq:09}
and \eqref{eq1}. This completes the proof of Theorem \ref{eq4}.
\end{myproof}
\begin{myproof}[\textbf{Proof of Corollary \ref{cor1.7}}]By Theorem \ref{eq4}, we have that $M$ is a gradient
Yamabe soliton with potential function $h$ if, and only if,
\begin{equation}
\begin{cases}
h''+2\frac{\varphi'h'}{\varphi}=0 \\
\rho-\frac{\lambda_{F}}{f^{2}}=0.\\
\end{cases}
\end{equation}
Since $f^{2}>0$, the second equation shows that $\lambda_{F}$ and $\rho$ always
have the same sign. Therefore, there is no expanding (respectively, shrinking)
gradient Yamabe soliton $M$ whose fiber is a shrinking (respectively, expanding)
trivial gradient Yamabe soliton.
\end{myproof}
\begin{myproof}[\textbf{Proof of Theorem \ref{eq5}}]Since $\lambda_{F}=\rho=0$, we have by equations \eqref{eq1} and \eqref{eq2}
of Theorem \ref{eq4} that
\begin{equation}\varphi'h'\varphi=-\frac{\varphi^{2}}{f}f'h'\nonumber
\end{equation}
and, since $h'\neq0$, we obtain
\begin{equation}\label{eq14}\frac{\varphi'}{\varphi}=-\frac{f'}{f}.
\end{equation}
Integrating this equation we have
$$f(\xi)=\frac{e^{c}}{\varphi(\xi)}$$
for some $c\in\mathbb{R}$, which is equation \eqref{eqa} of Theorem \ref{eq5}.
Integrating the equation \eqref{eq:09}, we have that
\begin{equation}\label{eq19}h'(\xi)=\frac{\alpha}{\varphi^{2}(\xi)}
\end{equation}
for some $\alpha\neq0$, and
$$h(\xi)=\alpha\int\frac{1}{\varphi^{2}(\xi)}d\xi$$
which is equation \eqref{eqb} of Theorem \ref{eq5}.
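Here \eqref{eq:09} is (equivalently) the equation $h''+2\frac{\varphi'h'}{\varphi}=0$; multiplying it by the integrating factor $\varphi^{2}$ gives
\begin{equation*}
\big(\varphi^{2}h'\big)'=\varphi^{2}\Big(h''+2\frac{\varphi'}{\varphi}h'\Big)=0,
\end{equation*}
so $\varphi^{2}h'$ is constant, which is \eqref{eq19}.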
Substituting equation \eqref{eq19} into \eqref{eq1}
we have
\begin{equation}\label{eq15}(n-1)(2\varphi\varphi''-n(\varphi')^{2})-2\frac{m}{f}(\varphi^{2}f''-(n-2)\varphi\varphi'f')-\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}+\alpha\frac{\varphi'}{\varphi}=0.
\end{equation}
Inserting equation \eqref{eq14} into \eqref{eq15} we obtain
\begin{equation}\label{eq16}\varphi\varphi''-\frac{(n+m)}{2}(\varphi')^{2}+\frac{\alpha}{2(n+m-1)}\frac{\varphi'}{\varphi}=0.
\end{equation}
Consider $\varphi(\xi)^{1-\frac{n+m}{2}}=\omega(\xi)$, then
\begin{equation}\omega'(\xi)=(1-\frac{n+m}{2})\varphi^{-\frac{n+m}{2}}\varphi',\hspace{0,5cm}\omega''(\xi)=(1-\frac{m+n}{2})\left(\varphi^{-\frac{n+m}{2}-1}(\varphi\varphi''-\frac{(n+m)}{2}(\varphi')^{2})\right)
\end{equation}
and we obtain that the differential equation \eqref{eq16} is
equivalent to
\begin{equation}\label{eq17}\omega''(\xi)+\frac{\alpha}{2(n+m-1)}\omega'(\xi)\omega(\xi)^{\frac{4}{n+m-2}}=0.
\end{equation}
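The equivalence can be checked directly: since $\omega=\varphi^{1-\frac{n+m}{2}}$, one has $\omega^{\frac{4}{n+m-2}}=\varphi^{-2}$, and therefore
\begin{equation*}
\omega''+\frac{\alpha}{2(n+m-1)}\omega'\omega^{\frac{4}{n+m-2}}=\Big(1-\frac{n+m}{2}\Big)\varphi^{-\frac{n+m}{2}-1}\Big(\varphi\varphi''-\frac{(n+m)}{2}(\varphi')^{2}+\frac{\alpha}{2(n+m-1)}\frac{\varphi'}{\varphi}\Big),
\end{equation*}
which vanishes if and only if \eqref{eq16} holds.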
Integrating equation \eqref{eq17} we have
$$\omega'(\xi)+\frac{\alpha(n+m-2)}{2(n+m+2)(n+m-1)}\omega(\xi)^{\frac{n+m+2}{n+m-2}}=\beta,\hspace{0,3cm}\beta\in\mathbb{R}.
$$
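Differentiating the left-hand side indeed recovers \eqref{eq17}, because $\frac{d}{d\xi}\,\omega^{\frac{n+m+2}{n+m-2}}=\frac{n+m+2}{n+m-2}\,\omega^{\frac{4}{n+m-2}}\omega'$.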
Thus,
\begin{equation}-\int\frac{1}{\frac{\alpha(n+m-2)}{2(n+m+2)(n+m-1)}\omega(\xi)^{\frac{n+m+2}{n+m-2}}-\beta}d\omega=\xi+\nu\nonumber
\end{equation}
and then
\begin{equation}(n+m-1)(n+m+2)\int\frac{\varphi d\varphi}{\alpha-\frac{2\beta(n+m-1)(n+m+2)}{n+m-2}\varphi^{\frac{n+m}{2}+1}}=\xi+\nu\nonumber
\end{equation}
which is equation \eqref{eqc} of Theorem \ref{eq5}. This proves the
necessary condition. A direct calculation shows the converse
implication, which concludes the proof of Theorem \ref{eq5}.
\end{myproof}
\begin{myproof}[\textbf{Proof of Theorem \ref{eq7}}]Since $h'=0$ and $\lambda_{F}=\rho=0$, we have by equations \eqref{eq1} and \eqref{eq2} of Theorem \ref{eq4} that
\begin{equation}(n-1)(2\varphi\varphi''-n(\varphi')^{2})-2\frac{m}{f}(\varphi^{2}f''-(n-2)\varphi\varphi'f')-\frac{m(m-1)}{f^2}\varphi^{2}(f')^{2}=0\nonumber
\end{equation}
which is equivalent to
\begin{equation}\left(\frac{f'}{f}-\frac{(n-2)}{(m+1)}\frac{\varphi'}{\varphi}\right)^{2}+\frac{2}{m+1}\left(\frac{f'}{f}-\frac{(n-2)}{(m+1)}\frac{\varphi'}{\varphi}\right)^{'}+\frac{(n+m-1)}{m(m+1)^{2}}\left(n(\frac{\varphi'}{\varphi})^{2}-2\frac{\varphi''}{\varphi}\right)=0.\nonumber
\end{equation}
Consider
$z=\frac{f'}{f}-\frac{(n-2)}{(m+1)}\frac{\varphi'}{\varphi}$, then
\begin{equation}\label{eq20}z^{2}+\frac{2}{m+1}z'+\frac{(n+m-1)}{m(m+1)^{2}}\left(n(\frac{\varphi'}{\varphi})^{2}-2\frac{\varphi''}{\varphi}\right)=0.
\end{equation}
Now, recall that a Riccati differential equation is a differential
equation of the form
\begin{equation}\label{eq21}z'(\xi)=p(\xi)+q(\xi)z(\xi)+r(\xi)z(\xi)^{2}
\end{equation}
where $p$, $q$ and $r$ are smooth functions on $\mathbb{R}$. If
$z_{p}(\xi)$ is a particular solution of \eqref{eq21}, then the
general solution of \eqref{eq21} is given by
$$z(\xi)=z_{p}(\xi)+\frac{e^{\int P(\xi)d\xi}}{-\int r(\xi)e^{\int P(\xi)d\xi}d\xi+c}$$
where $P(\xi)=q(\xi)+2z_{p}(\xi)r(\xi)$ and $c$ is a constant.
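This is the classical reduction of the Riccati equation to a linear one: writing $z=z_{p}+\frac{1}{v}$ in \eqref{eq21} and using that $z_{p}$ solves \eqref{eq21}, one finds
\begin{equation*}
v'+P(\xi)v=-r(\xi),\qquad P(\xi)=q(\xi)+2z_{p}(\xi)r(\xi),
\end{equation*}
whose general solution $v(\xi)=e^{-\int P(\xi)d\xi}\big(c-\int r(\xi)e^{\int P(\xi)d\xi}d\xi\big)$ gives the formula above.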
Observe that \eqref{eq20} is a Riccati differential equation with
\begin{equation}q(\xi)=0,\hspace{0,2cm} r(\xi)=-\frac{m+1}{2}\hspace{0,2cm} \text{and}\hspace{0,2cm}
p(\xi)=-\frac{(n+m-1)}{2m(m+1)}\left(n(\frac{\varphi'}{\varphi})^{2}-2\frac{\varphi''}{\varphi}\right).
\end{equation}
Then we obtain
$$\frac{f'(\xi)}{f(\xi)}=\frac{(n-2)}{(m+1)}\frac{\varphi'(\xi)}{\varphi(\xi)}+z_{p}(\xi)+\frac{e^{-(m+1)\int z_{p}(\xi)d\xi}}{\frac{m+1}{2}\int e^{-(m+1)\int z_{p}(\xi)d\xi}d\xi+c}$$
and thus,
$$f=\varphi^{\frac{n-2}{m+1}}e^{\int z_{p}d\xi}\left(\int e^{-(m+1)\int z_{p}d\xi}d\xi+\frac{2}{m+1}C\right)^{\frac{2}{m+1}}$$
where $z_{p}(\xi)$ is a particular solution of \eqref{eq20}. This
expression is equation \eqref{eq22} of Theorem \ref{eq7}.
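The expression for $f$ follows by integrating the previous identity: up to the choice of integration constants,
\begin{equation*}
\log f=\frac{n-2}{m+1}\log\varphi+\int z_{p}\,d\xi+\frac{2}{m+1}\log\Big(\frac{m+1}{2}\int e^{-(m+1)\int z_{p}d\xi}d\xi+c\Big),
\end{equation*}
and exponentiating (renaming the constants) gives the displayed formula.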
Now, since $h'=0$, we have that $h(\xi)$ is constant, which is equation
\eqref{eq23} of Theorem \ref{eq7}. This proves the necessary condition.
A direct calculation shows the converse implication, which
concludes the proof of Theorem \ref{eq7}.
\end{myproof}
\begin{myproof}[\textbf{Proof of Theorem \ref{eq6}}]In this case, since $\lambda_{F}=\rho=0$, we have
by differential equations \eqref{eq:10} and \eqref{eq10} that
$$h(\xi)=\alpha\int\frac{1}{\varphi^{2}(\xi)}d\xi$$
for some $\alpha\neq0$, while $f$ and $\varphi$ are arbitrary.
This proves the necessary condition. A direct calculation
shows the converse implication, which concludes the proof of
Theorem \ref{eq6}.
\end{myproof}
\end{section}
\vskip0.8cm
\noindent
{Willian Isao Tokura} (e-mail: [email protected])\\[2pt]
Instituto de Matem\'atica e Estat\'istica\\
Universidade Federal de Goi\'as\\
74001-900-Goi\^ania-GO\\
Brazil\\
\noindent{Levi Adriano } (e-mail: [email protected])\\[2pt]
Instituto de Matem\'atica e Estat\'istica\\
Universidade Federal de Goi\'as\\
74001-900-Goi\^ania-GO\\
Brazil\\
\noindent{Romildo da Silva Pina} (e-mail: [email protected])\\[2pt]
Instituto de Matem\'atica e Estat\'istica\\
Universidade Federal de Goi\'as\\
74001-900-Goi\^ania-GO\\
Brazil
\end{document}