\section{Conclusion} We have reviewed a series of quantum optical memory protocols conceived to store information in atomic ensembles. Without providing an exhaustive review of the different systems and techniques, we propose to sort the protocols into two categories, namely photon echo and {\it slow-light} memories. Our analysis is based on the significant differences in their storage and retrieval dynamics. We have used a minimalist semi-classical Schr\"odinger-Maxwell model to describe the signal propagation and to evaluate the storage efficiency in atomic ensembles. The efficiency scaling makes it possible to compare the different memory types but represents only one figure of merit. The applications in quantum information processing go beyond the simple analogy with classical memories where the signal is stored and retrieved. Other figures of merit, such as the storage time, the bandwidth and the multimode capacity, which we only superficially address when discussing the storage dynamics, should be considered in that respect. In that sense, our contribution is mainly an introduction that can be pushed further to give a more complete comparison of the memories' performance. Our objective was essentially to give a fundamental vision of a few protocols that we consider as archetypes, and hopefully to stimulate the proposition of new architectures. We have finally placed our analysis in the context of quantum storage by deriving a variety of criteria adapted for both continuous and discrete variables. We have developed a toy model for the interaction of light with an atomic ensemble to evaluate the outcome of various quantum optics measurements that can serve as benchmarks to certify the quantum nature of optical memories. We have not dwelt on the different material systems that physically represent the memory support. They all have in common that they exhibit long-lived (optical or spin) coherent states, but they cover different physical realities, going from cold atomic vapors to luminescent impurities in solids, such as rare-earth doped insulators, or excitons in semiconductors, the latter holding great promise in terms of integration. The portability of each protocol to a specific system would deserve a discussion by itself, for which our analytic review of protocols can be seen as an introductory basis. \section*{Acknowledgments} Research at the University of Basel is supported by the Swiss National Science Foundation (SNSF) through the NCCR QSIT, the grant number PP00P2-150579 and the Army Research Laboratory Center for Distributed Quantum Information via the project SciNet. The work at Laboratoire Aimé Cotton received funding from the national grant ANR DISCRYS (ANR-14-CE26-0037-02) and from Investissements d'Avenir du LabEx PALM ExciMol and ATERSIIQ (ANR-10-LABX-0039-PALM). The work at Laboratoire Pierre Aigrain has been partially funded by the French National Research Agency (ANR) through the project SMEQUI. \appendix \section{Strong pulse propagation}\label{strong_pulse} Even if the standard 2PE presented in section \ref{2PE} is not appropriate for quantum storage, it illustrates, as an example, a potential issue when strong pulses are used in echo sequences. The $\pi$-pulse, as an element of the toolbox for quantum memories, should be used with precaution. The safest answer is certainly not to use $\pi$-pulses at all, as proposed in the controlled reversible inhomogeneous broadening protocol detailed in section \ref{CRIB}.
We here briefly discuss the propagation of strong $\pi$-pulses, which appeared as a critical element to understand the 2PE efficiencies simulated in fig.\ref{fig:2PE_simul}. We have assumed the $\pi$-pulse to be sufficiently short to have a well-defined action on the stored coherence. In practice, the rephasing pulse should be much shorter than the signal. This condition should be maintained all along the propagation, which is far from guaranteed. $\pi$-pulses are very singular in that sense because they maximally invert the atoms, an inversion irremediably associated with the energy lost by the pulse. The requirement of energy conservation actually imposes a distortion of the pulse. There is no analytical solution for the propagation of strong pulses in absorbing media. Numerical simulations are then necessary to predict the exact pulse shape. That being said, the qualitative analysis can be reinforced by invoking the McCall and Hahn area theorem \cite{area67, allen2012optical, Eberly:98}. The latter gives a remarkable conservation law for the pulse area through propagation as \begin{equation} \partial_z\theta(z)= -\displaystyle\frac{\alpha}{2} \sin\left(\theta(z)\right)\label{area} \end{equation} In the weak signal limit (small area \cite{crisp1970psa}), $\sin\left(\theta\right) \simeq \theta$ and eq.\eqref{area} integrates to $\theta(z) = \theta(0) e^{-\alpha z/2}$: one retrieves the Bouguer-Beer-Lambert law for the area (eq.\ref{bouguer}), as expected in the perturbative regime. A $2\pi$-pulse typically undergoes the so-called self-induced transparency (SIT) \cite{area67}. The shape-preserving propagation \cite{allen2012optical} is not surprising in the light of the energy and area conservation of $2\pi$-pulses. Indeed, a $2\pi$ Rabi flopping of the atoms doesn't leave any energy in the population. Additionally, the $2\pi$ area is unaffected by the propagation, as given by the fixed points of eq.\eqref{area} (any area that is a multiple of $\pi$). Along the same lines, a $\pi$-pulse conserves its area but not its energy. Pulse distortions are then expected, so as to satisfy two contradictory conditions on the energy and the area: the pulse amplitude is reduced as the duration increases to preserve the total area, so the energy, which scales as the pulse amplitude (multiplied by the area, constant in that case), is then reduced. These considerations should not be underestimated when strong, and more specifically $\pi$-pulses, are used. To illustrate the pulse distortion, we test the expression \eqref{etaPi} by performing a numerical simulation with different $\pi$-pulse durations (see fig.\ref{fig:2PE_simul}). The deviation from the expected scaling comes precisely from the $\pi$-pulse distortion, as observed in fig.\ref{fig:2PE_simul} (top). This is an intrinsic limitation of $\pi$-pulses when used in absorbing ensembles. The limitation is fundamental and cannot be avoided by using a cavity to enhance the interaction with a weakly absorbing sample: the same distortions are expected in cavities \cite{gti}. The real alternative is the complex hyperbolic secant (CHS) pulse, as discussed in section \ref{strong_pulse_rose}. The latter is not only robust against experimental imperfections (such as power fluctuations) but is also much less sensitive to propagation distortions \cite{Demeter}. Following our analysis, there is no constraint analogous to the area theorem on the CHS and other frequency-swept pulses.
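To make the behavior of eq.\eqref{area} concrete, the following minimal Python sketch (our own illustration, not part of the simulations of fig.\ref{fig:2PE_simul}; the absorption coefficient and the grid are arbitrary choices) integrates the area theorem and exhibits the instability of the $\pi$ area:

\begin{verbatim}
# Minimal sketch of the McCall-Hahn area theorem, eq. (area):
#   d(theta)/dz = -(alpha/2) * sin(theta).
# Areas below pi decay toward 0 (Beer-Lambert absorption of the area),
# areas above pi grow toward the stable 2*pi value (self-induced
# transparency), and theta = pi is an unstable fixed point.
import numpy as np

def propagate_area(theta0, alpha=1.0, length=10.0, dz=1e-3):
    """Integrate the area theorem with a simple Euler scheme."""
    theta = theta0
    for _ in range(int(length / dz)):
        theta -= 0.5 * alpha * np.sin(theta) * dz
    return theta

for frac in (0.9, 1.0, 1.1):
    theta_out = propagate_area(frac * np.pi)
    print(f"theta(0) = {frac:.1f} pi -> theta(L) = {theta_out/np.pi:.3f} pi")
\end{verbatim}

A $\pi$-pulse thus keeps its area only in the idealized sense: any deviation is amplified along $z$, which is another way of seeing why its energy, and hence its shape, cannot be preserved.

\section{Photon counting measurements}\label{appendix:formulas_counts} We here give the detailed derivation of the formulas \eqref{autocorrelation}, \eqref{Cauchy_Schwarz} and \eqref{Bell} used in section \ref{counting_criterion}.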
We consider non photon-number-resolving detectors with noise. Let $D_a(\eta_d)$ be the POVM element (positive operator-valued measure) associated with a click event when such a detector operates on a single mode of the electromagnetic field characterized by the annihilation $a$ and creation $a^\dag$ operators. Let $\eta_d$ be the efficiency of the detector and $p_{\text{dc}}$ the probability of a dark count. We have \begin{equation} D_a(\eta_d) = \mathbb{1} - (1-p_{\text{dc}})(1-\eta_d)^{a^\dag a}. \end{equation} We first focus on the setup presented in fig.\ref{Fig1} by assuming that the noise and efficiency of the two detectors are the same. The ratio between the twofold coincidences and the product of singles is given by \begin{equation} g_a^{(2)} = \frac{\langle D_{d_a}(\eta_d) D_{\bar{d}_a}(\eta_d)\rangle}{\langle D_{d_a}(\eta_d) \rangle \langle D_{\bar{d}_a}(\eta_d) \rangle}. \end{equation} Basic algebra using the relation between the modes $a,$ $d_a$ and $\bar{d}_a$ shows that $D_{d_a}(\eta_d) D_{\bar{d}_a}(\eta_d) = \mathbb{1}-2(1-p_{\text{dc}})(1-\eta_d/2)^{a^\dag a}+(1-p_{\text{dc}})^2(1-\eta_d)^{a^\dag a}$ and $D_{d_a}(\eta_d)=D_{a}(\eta_d/2)$. By including the memory efficiency in the detector efficiency, the ratio $g_a^{(2)}$ can be computed from \begin{equation} \label{auxg2} g_a^{(2)} =\frac{\langle 1| \mathbb{1}-2(1-p_{\text{dc}})(1-\eta/2)^{a^\dag a}+(1-p_{\text{dc}})^2(1-\eta)^{a^\dag a} |1 \rangle}{\langle 1| \mathbb{1}-(1-p_{\text{dc}})(1-\eta/2)^{a^\dag a}|1 \rangle^2} \end{equation} with $\eta=\eta_d\eta_m.$ Using an exponential form for $(1-\eta)^{a^\dag a}$ and expanding as a Taylor series, we find \begin{equation} \label{meanvaluedet} \langle n| (1-\eta)^{a^\dag a} |n\rangle = (1-\eta)^{n}. \end{equation} Eq. \eqref{autocorrelation} is obtained by combining \eqref{auxg2} and \eqref{meanvaluedet}.\\ The expression for the Cauchy-Schwarz parameter is obtained from \begin{equation} R=\frac{\langle D_{d_a}(\eta_d)D_{d_b}(\eta_d)\rangle^2}{\langle D_{d_a}(\eta_d)D_{\bar{d}_a}(\eta_d) \rangle \langle D_{d_b}(\eta_d)D_{\bar{d}_b}(\eta_d) \rangle} \end{equation} which leads to \eqref{Cauchy_Schwarz} by using the following results \begin{align} \nonumber \text{tr} (\rho_a x^{a^\dag a})& = \frac{1-p}{1-px},\\ \text{tr} (\rho_{ab} x^{a^\dag a + b^\dag b}) &= \frac{1-p}{1-px^2} \end{align} where $\rho_{ab}$ is the density matrix associated with a two-mode squeezed vacuum state and $\rho_a = \text{tr}_b\, \rho_{ab}.$ The expression for the visibility of the interference pattern observed in the Bell test experiment is obtained by noting that the twofold coincidences are maximal between orthogonal polarizations while the minimum is obtained between identical polarizations. Hence, the numerator of eq.\eqref{Bell} can be obtained by taking the difference between $$\langle \psi^-_{a_h a_v b_h b_v} | D_{a_h}(\eta_d) D_{b_v}(\eta_d) | \psi^-_{a_h a_v b_h b_v}\rangle$$ and $$\langle \psi^-_{a_h a_v b_h b_v} | D_{a_h}(\eta_d) D_{b_h}(\eta_d) | \psi^-_{a_h a_v b_h b_v}\rangle$$ while the denominator comes from the sum of these two expectation values. \\
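As a quick consistency check (our own sketch; the efficiency and dark-count values below are arbitrary), eq.\eqref{auxg2} can be evaluated directly for a heralded single photon:

\begin{verbatim}
# Hedged numeric sketch of the autocorrelation formula for |1>:
# two noisy, non photon-number-resolving detectors behind a balanced
# beam splitter, using <n|(1-x)^(a^dag a)|n> = (1-x)^n.
def g2_single_photon(eta, p_dc):
    coincidence = (1 - 2*(1 - p_dc)*(1 - eta/2)
                   + (1 - p_dc)**2 * (1 - eta))
    single = 1 - (1 - p_dc)*(1 - eta/2)
    return coincidence / single**2

# An ideal single photon gives g2 = 0 whatever the efficiency eta
# (eta stands for the product of memory and detector efficiencies);
# dark counts lift g2 above zero.
print(g2_single_photon(eta=0.3, p_dc=0.0))   # -> 0.0
print(g2_single_photon(eta=0.3, p_dc=1e-3))  # -> small but nonzero
\end{verbatim}

\end{document}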
\begin{document} \vskip 20pt MSC 34C10 \vskip 20pt \centerline{\bf Oscillatory and non oscillatory criteria for linear} \centerline{\bf four dimensional Hamiltonian systems} \vskip 20 pt \centerline{\bf G. A. Grigorian} \centerline{\it Institute of Mathematics NAS of Armenia} \centerline{\it E-mail: [email protected]} \vskip 20 pt \noindent Abstract. The Riccati equation method is used to study the oscillatory and non oscillatory behavior of solutions of linear four dimensional Hamiltonian systems. One oscillatory and three non oscillatory criteria are proved. The obtained results are compared with some well-known ones on examples. \vskip 20 pt Key words: Riccati equation, oscillation, non oscillation, conjoined (prepared, preferred) solution, Liouville's formula. \vskip 20 pt {\bf 1. Introduction.} Let $A(t) \equiv \bigl(a_{jk}(t)\bigr)_{j,k=1}^2,\ B(t)\equiv \bigl(b_{jk}(t)\bigr)_{j,k=1}^2,\ C(t)\equiv \bigl(c_{jk}(t)\bigr)_{j,k=1}^2,\ t\ge t_0$, be complex-valued continuous matrix functions on $[t_0;+\infty)$, and let $B(t)$ and $C(t)$ be Hermitian, i.e., $B(t) = B^*(t),\ C(t) = C^*(t),\ t\ge t_0$. Consider the four dimensional Hamiltonian system $$ \sist{\phi'= A(t)\phi + B(t)\psi;}{\psi' = C(t)\phi - A^*(t)\psi, \quad t\ge t_0.} \eqno (1.1) $$ Here $\phi = (\phi_1, \phi_2),\ \psi = (\psi_1, \psi_2)$ are the unknown continuously differentiable vector functions on $[t_0;+\infty)$. Along with the system (1.1) consider the linear system of matrix equations $$ \sist{\Phi'= A(t)\Phi + B(t)\Psi;}{\Psi' = C(t)\Phi - A^*(t)\Psi, \quad t\ge t_0,} \eqno (1.2) $$ where $\Phi(t)$ and $\Psi(t)$ are the unknown continuously differentiable matrix functions of dimension $2\times 2$ on $[t_0;+\infty)$. {\bf Definition 1.1}. {\it A solution $(\Phi(t), \Psi(t))$ of the system (1.2) is called conjoined (or prepared, preferred) if $\Phi^*(t)\Psi(t) = \Psi^*(t)\Phi(t),\ t\ge t_0$.} {\bf Definition 1.2.} {\it A solution $(\Phi(t), \Psi(t))$ of the system (1.2) is called oscillatory if $\det \Phi(t)$ has arbitrarily large zeroes.} {\bf Definition 1.3.} {\it The system (1.1) is called oscillatory if all conjoined solutions of the system (1.2) are oscillatory; otherwise it is called non oscillatory.} The study of the oscillatory and non oscillatory behavior of Hamiltonian systems (in particular, of the system (1.1)) is an important problem of the qualitative theory of differential equations, and many works are devoted to it (see, e.g., [1 - 10] and the works cited therein). For any Hermitian matrix $H$ we denote its nonnegative (positive) definiteness by $H \ge 0\ (H>0)$. In the works [1 - 9] the oscillatory behavior of general Hamiltonian systems is studied under the condition that the coefficient corresponding to $B(t)$ is positive definite. In this paper we study the oscillatory and non oscillatory behavior of the system (1.1) in the direction that the assumption $B(t) > 0,\ t\ge t_0,$ may be dropped. {\bf 2. Auxiliary propositions}. Let $f(t),\ g(t),\ h(t),\ h_1(t)$ be real-valued continuous functions on $[t_0;+\infty)$. Consider the Riccati equations $$ y' + f(t) y^2 + g(t) y + h(t) = 0, \quad t\ge t_0; \eqno (2.1) $$ $$ y' + f(t) y^2 + g(t) y + h_1(t) = 0, \quad t\ge t_0. \eqno (2.2) $$ {\bf Theorem 2.1}. {\it Let Eq. (2.2) have a real-valued solution $y_1(t)$ on $[t_1;t_2)\ (t_0\le t_1 < t_2 \le +\infty)$, and let $f(t) \ge 0,\ h(t) \le h_1(t),\ t\in [t_1;t_2)$. Then for each $y_{(0)} \ge y_1(t_1)$ Eq. (2.1) has a solution $y_0(t)$ on $[t_1;t_2)$ with $y_0(t_1) = y_{(0)}$, and $y_0(t) \ge y_1(t),\ t\in [t_1;t_2)$.} A proof of a more general theorem is presented in [11] (see also [12]). Denote $I_{g,h}(\xi;t) \equiv \il{\xi}{t} \exp\biggl\{-\il{\tau}{t}g(s) d s\biggr\} h(\tau) d \tau,\ t\ge \xi \ge t_0.$ Let $t_0 < \tau_0 \le + \infty$ and let $t_0 < t_1 < \dots$ be a finite or infinite sequence such that $t_k \in [t_0;\tau_0],\ k=1,2,\dots$ We assume that if $\{t_k\}$ is finite then the maximum of $t_k$ equals $\tau_0$, and if $\{t_k\}$ is infinite then $\lim\limits_{k\to +\infty} t_k = \tau_0$. {\bf Theorem 2.2.} {\it Let $f(t) \ge 0,\ t\in [t_0; \tau_0)$, and $$ \il{t_k}{t}\exp\biggl\{\il{t_k}{\tau}\bigl[g(s) - I_{g,h}(t_k;s)\bigr]d s\biggr\} h(\tau) d \tau \le 0, \quad t\in [t_k;t_{k+1}),\ k=0,1,\dots $$ Then for every $y_{(0)} \ge 0$ Eq. (2.1) has a solution $y_0(t)$ on $[t_0;\tau_0)$ satisfying the initial condition $y_0(t_0) = y_{(0)}$, and $y_0(t) \ge 0,\ t\in [t_0; \tau_0)$.} See the proof in [12]. Consider the matrix Riccati equation $$ Z' + Z B(t) Z + A^*(t) Z + Z A(t) - C(t) = 0, \quad t\ge t_0. \eqno (2.3) $$ The solutions $Z(t)$ of this equation existing on an interval $[t_1; t_2)\ (t_0 \le t_1 < t_2 \le +\infty)$ are connected with solutions $(\Phi(t), \Psi(t))$ of the system (1.2) by the relations (see [10]): $$ \Phi'(t) = [A(t) + B(t) Z(t)] \Phi(t),\ \det \Phi(t_1) \ne 0,\ \Psi(t) = Z(t) \Phi(t),\ t\in [t_1; t_2). \eqno (2.4) $$ Let $Z_0(t)$ be a solution of Eq. (2.3) on $[t_1; t_2)$. {\bf Definition.} {\it We say that $[t_1; t_2)$ is the maximum existence interval for $Z_0(t)$ if $Z_0(t)$ cannot be continued to the right of $t_2$ as a solution of Eq. (2.3).} {\bf Lemma 2.1}. {\it Let $Z_0(t)$ be a solution of Eq. (2.3) on $[t_1;t_2)$ and let $t_2 < +\infty$. Then $[t_1;t_2)$ cannot be the maximum existence interval for $Z_0(t)$, provided the function $G(t) \equiv \il{t_1}{t}tr [B(\tau) Z_0(\tau)]d\tau,\ t\in [t_1; t_2)$, is bounded from below on $[t_1; t_2)$.} Proof: analogous to the proof of Lemma 2.1 of [10].
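Before turning to the matrix case, a minimal constant-coefficient illustration of Theorem 2.1 (our own example, not taken from [11, 12]) may be useful. Take $t_1 = 0,\ f(t) \equiv 1,\ g(t) \equiv 0,\ h_1(t) \equiv 0,\ h(t) \equiv -1$. Then Eq. (2.2) becomes $y_1' + y_1^2 = 0$, with the solution $$ y_1(t) = \frac{y_1(0)}{1 + y_1(0)\, t}, \quad t\ge 0, $$ existing on the whole half-axis for every $y_1(0) \ge 0$. Since $h(t) \le h_1(t)$, Theorem 2.1 guarantees that Eq. (2.1), here $y_0' + y_0^2 - 1 = 0$, has a solution $y_0(t)$ on $[0;+\infty)$ with $y_0(0) = y_{(0)}$ for every $y_{(0)} \ge y_1(0)$, and $y_0(t) \ge y_1(t)$. Indeed, for $0 \le y_{(0)} < 1$ one checks directly that $y_0(t) = \tanh(t + \mathrm{artanh}\, y_{(0)})$ is such a solution.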
Assume $B(t) = diag \{b_1(t), b_2(t)\},\ t\ge t_0$. Then it is not difficult to verify that for Hermitian unknowns $Z=\begin{pmatrix}z_{11} & z_{12}\\ \overline{z}_{12} & z_{22}\end{pmatrix}$ Eq.
(2.3) is equivalent to the following nonlinear system $$ \left\{ \begin{array}{l} z'_{11} + b_1(t) z^2_{11} + 2 Re\, a_{11}(t)\, z_{11} + b_2(t)|z_{12}|^2 + a_{21}(t) z_{12} + \overline{a}_{21}(t) \overline{z}_{12} - c_{11}(t) = 0;\\ z'_{12} + [b_1(t) z_{11} + b_2(t) z_{22} + \overline{a}_{11}(t) + a_{22}(t)] z_{12} + a_{12}(t) z_{11} + \overline{a}_{21}(t) z_{22} - c_{12}(t) = 0;\\ z'_{22} + b_2(t) z_{22}^2 + 2 Re\, a_{22}(t)\, z_{22} + b_1(t)|z_{12}|^2 + \overline{a}_{12}(t) z_{12} + a_{12}(t) \overline{z}_{12} - c_{22}(t) = 0, \end{array} \right. \eqno (2.5) $$ $t\ge t_0.$ If $b_2(t) \ne 0,\ t\ge t_0,$ then it is not difficult to verify that the first equation of the system (2.5) can be rewritten in the form $$ z'_{11} + b_1(t) z^2_{11} + 2 Re\, a_{11}(t)\, z_{11} + b_2(t)\left|z_{12} + \frac{\overline{a}_{21}(t)}{b_2(t)}\right|^2 - \frac{|a_{21}(t)|^2}{b_2(t)} - c_{11}(t) = 0, \quad t\ge t_0, \eqno (2.6) $$ and if in addition $\overline{a}_{21}(t)/ b_2(t)$ is continuously differentiable on $[t_0; +\infty)$ then by the substitution $$ z_{12} = y - \frac{\overline{a}_{21}(t)}{b_2(t)}, \quad t \ge t_0, \eqno (2.7) $$ in the first and second equations of the system (2.5) we get the subsystem $$ \left\{ \begin{array}{l} z'_{11} + b_1(t) z^2_{11} + 2 Re\, a_{11}(t)\, z_{11} + b_2(t)|y|^2 - \frac{|a_{21}(t)|^2}{b_2(t)} - c_{11}(t) = 0;\\ y' + [b_1(t) z_{11} + b_2(t) z_{22} + \overline{a}_{11}(t) + a_{22}(t)] y + \bigl(a_{12}(t) - \frac{b_1(t)}{b_2(t)} \overline{a}_{21}(t)\bigr) z_{11} - \bigl(\frac{\overline{a}_{21}(t)}{b_2(t)}\bigr)' - \frac{\overline{a}_{21}(t)}{b_2(t)}\bigl(\overline{a}_{11}(t) + a_{22}(t)\bigr) - c_{12}(t) = 0, \quad t\ge t_0. \end{array} \right. \eqno (2.8) $$ Analogously, if $b_1(t) \ne 0,\ t\ge t_0,$ then the third equation of the system (2.5) can be rewritten in the form $$ z'_{22} + b_2(t) z^2_{22} + 2 Re\, a_{22}(t)\, z_{22} + b_1(t)\left|z_{12} + \frac{a_{12}(t)}{b_1(t)}\right|^2 - \frac{|a_{12}(t)|^2}{b_1(t)} - c_{22}(t) = 0, \quad t\ge t_0, \eqno (2.9) $$ and if in addition $a_{12}(t)/ b_1(t)$ is continuously differentiable on $[t_0; +\infty)$ then by the substitution $$ z_{12} = v - \frac{a_{12}(t)}{b_1(t)}, \quad t \ge t_0, \eqno (2.10) $$ in the second and third equations of the system (2.5) we obtain the subsystem $$ \left\{ \begin{array}{l} z'_{22} + b_2(t) z^2_{22} + 2 Re\, a_{22}(t)\, z_{22} + b_1(t)|v|^2 - \frac{|a_{12}(t)|^2}{b_1(t)} - c_{22}(t) = 0;\\ v' + [b_1(t) z_{11} + b_2(t) z_{22} + \overline{a}_{11}(t) + a_{22}(t)] v + \bigl(\overline{a}_{21}(t) - \frac{b_2(t)}{b_1(t)} a_{12}(t)\bigr) z_{22} - \bigl(\frac{a_{12}(t)}{b_1(t)}\bigr)' - \frac{a_{12}(t)}{b_1(t)}\bigl(\overline{a}_{11}(t) + a_{22}(t)\bigr) - c_{12}(t) = 0, \quad t\ge t_0. \end{array} \right.
\eqno (2.11) $$ If $(z_{11}(t), y(t))$ is a solution of the subsystem (2.8) on $[t_0;t_1)\ (t_0 < t_1 \le + \infty)$ with $y(t_0) = 0$, and $(z_{22}(t), v(t))$ is a solution of the subsystem (2.11) on $[t_0;t_1)$ with $v(t_0)=0$, then by the Cauchy formula from the second equation of the subsystem (2.8) and from the second equation of the subsystem (2.11) we have, respectively: $$ y(t) = - \exp\biggl\{-\il{t_0}{t}b_1(\tau)z_{11}(\tau)d\tau\biggr\}\il{t_0}{t}\biggl[\exp\biggl\{\il{t_0}{\tau}b_1(s) z_{11}(s)d s\biggr\}\biggr]'\biggl(\frac{a_{12}(\tau)}{b_1(\tau)} - \frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\biggr)\times $$ $$ \times\exp\biggl\{-\il{\tau}{t}\bigl(b_2(s) z_{22}(s) + \overline{a}_{11}(s) + a_{22}(s)\bigr)ds\biggr\}d\tau + $$ $$ +\il{t_0}{t}\exp\biggl\{-\il{\tau}{t}\bigl(b_1(s)z_{11}(s) + b_2(s) z_{22}(s) + \overline{a}_{11}(s) + a_{22}(s)\bigr)ds\biggr\}\biggl[\biggl(\frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\biggr)' + \frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]d\tau, $$ $$ v(t) = - \exp\biggl\{-\il{t_0}{t}b_2(\tau)z_{22}(\tau)d\tau\biggr\}\il{t_0}{t}\biggl[\exp\biggl\{\il{t_0}{\tau}b_2(s) z_{22}(s)d s\biggr\}\biggr]'\biggl(\frac{\overline{a}_{21}(\tau)}{b_2(\tau)} - \frac{a_{12}(\tau)}{b_1(\tau)} \biggr)\times $$ $$ \times\exp\biggl\{-\il{\tau}{t}\bigl(b_1(s) z_{11}(s) + \overline{a}_{11}(s) + a_{22}(s)\bigr)ds\biggr\}d\tau + $$ $$ +\il{t_0}{t}\exp\biggl\{-\il{\tau}{t}\bigl(b_1(s)z_{11}(s) + b_2(s) z_{22}(s) + \overline{a}_{11}(s) + a_{22}(s)\bigr)ds\biggr\}\biggl[\biggl(\frac{a_{12}(\tau)}{b_1(\tau)}\biggr)' + \frac{a_{12}(\tau)}{b_1(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]d\tau, \quad t\in [t_0;t_1). $$ From here it is easy to derive
{\bf Lemma 2.2}. {\it Let $b_j(t) > 0,\ j=1,2,$ let the functions $a_{12}(t)/b_1(t),\ \overline{a}_{21}(t)/b_2(t)$ be continuously differentiable on $[t_0; t_1)\ (t_0 < t_1 \le + \infty)$, and let $(z_{11}(t), y(t))$ and $(z_{22}(t), v(t))$ be solutions of the subsystems (2.8) and (2.11) respectively on $[t_0; t_1)$ such that $z_{jj}(t) \ge 0,\ t\in [t_0;t_1),\ j=1,2,\ y(t_0) = v(t_0) = 0$. Then $$ |y(t)| \le \mathfrak{M}(t) + \il{t_0}{t}\biggl|\exp\biggl\{-\il{\tau}{t}\bigl(\overline{a}_{11}(s) + a_{22}(s)\bigr)ds\biggr\}\biggl[\biggl(\frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\biggr)' + \frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]\biggr|d\tau, $$ $$ |v(t)| \le \mathfrak{M}(t) + \il{t_0}{t}\biggl|\exp\biggl\{-\il{\tau}{t}\bigl(\overline{a}_{11}(s) + a_{22}(s)\bigr)ds\biggr\}\biggl[\biggl(\frac{a_{12}(\tau)}{b_1(\tau)}\biggr)' + \frac{a_{12}(\tau)}{b_1(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]\biggr|d\tau, \quad t\in [t_0;t_1), $$ where $$ \mathfrak{M}(t)\equiv \max\limits_{\tau\in [t_0; t]}\biggl|\exp\biggl\{-\il{\tau}{t}\bigl(\overline{a}_{11}(s) + a_{22}(s)\bigr)ds\biggr\}\biggl(\frac{a_{12}(\tau)}{b_1(\tau)} - \frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\biggr)\biggr|, \quad t\ge t_0. $$ } $\Box$ {\bf Lemma 2.3.} {\it For any two square matrices $M_1\equiv (m_{ij}^1)_{i,j=1}^n,\ M_2\equiv (m_{ij}^2)_{i,j=1}^n$ the equality $$ tr (M_1 M_2) = tr (M_2 M_1) $$ is valid.} Proof. We have: $tr (M_1 M_2) = \sum\limits_{j=1}^n\bigl(\sum\limits_{k=1}^n m_{jk}^1 m_{kj}^2\bigr) = \sum\limits_{k=1}^n\bigl(\sum\limits_{j=1}^n m_{jk}^1 m_{kj}^2\bigr) = \sum\limits_{k=1}^n\bigl(\sum\limits_{j=1}^n m_{kj}^2 m_{jk}^1\bigr) = tr (M_2 M_1).$ The lemma is proved. {\bf 3. Main results}. Let $f_{jk}(t),\ j,k =1,2,\ t\ge t_0,$ be real-valued continuous functions on $[t_0; +\infty)$. Consider the linear system of equations $$ \sist{\phi_1' = f_{11}(t) \phi_1 + f_{12}(t) \psi_1;}{\psi_1' = f_{21}(t) \phi_1 + f_{22}(t) \psi_1, \quad t\ge t_0,} \eqno (3.1) $$ and the Riccati equation $$ y' + f_{12}(t) y^2 + [f_{11}(t) - f_{22}(t)] y - f_{21}(t) = 0, \quad t\ge t_0. \eqno (3.2) $$ All solutions $y(t)$ of the last equation existing on some interval $[t_1; t_2)\ (t_0 \le t_1 < t_2 \le + \infty)$ are connected with solutions $(\phi_1(t), \psi_1(t))$ of the system (3.1) by the relations (see [13]): $$ \phi_1(t) = \phi_1(t_1)\exp\biggl\{\il{t_1}{t}\bigl[f_{12}(\tau) y(\tau) + f_{11}(\tau)\bigr]d\tau\biggr\},\ \phi_1(t_1)\ne 0,\ \psi_1(t) = y(t) \phi_1(t), \eqno (3.3) $$ $t\in [t_1; t_2).$ {\bf Definition 3.1.} {\it The system (3.1) is called oscillatory if for its every solution $(\phi_1(t), \psi_1(t))$ the function $\phi_1(t)$ has arbitrarily large zeroes.} {\bf Remark 3.1.} {\it Some explicit oscillatory criteria for the system (3.1) are proved in [10] and [14].} {\bf 3.1. The case when $B(t)$ is a diagonal matrix}. In this subsection we will assume that $B(t) = diag\{b_1(t), b_2(t)\}$.
Denote: $$ \chi_j(t) \equiv \sist{-c_{jj}(t), \ \mbox{if}\ b_{3-j}(t) = 0;}{-c_{jj}(t) - \frac{|a_{3-j,j}(t)|^2}{b_{3-j}(t)}, \ \mbox{if}\ b_{3-j}(t) \ne 0,} \quad t\ge t_0,\ j=1,2. $$ {\bf Theorem 3.1.} {\it Assume $b_j(t) \ge 0,\ t\ge t_0,$ and that if $b_{3-j}(t) = 0$ then $a_{3-j,j}(t) = 0,\ j=1,2,\ t\ge t_0.$ Under these restrictions the system (1.1) is oscillatory provided one of the systems $$ \sist{\phi_1'= 2 Re\, (a_{jj}(t))\, \phi_1 + b_j(t) \psi_1;}{\psi_1' = - \chi_j(t) \phi_1, \quad t\ge t_0,} \eqno (3.4_j) $$ $j=1,2$, is oscillatory.} Proof. Suppose the system (1.1) is not oscillatory. Then for some conjoined solution $(\Phi(t), \Psi(t))$ of the system (1.2) there exists $t_1 \ge t_0$ such that $\det \Phi(t) \ne 0,\ t\ge t_1.$ Due to (2.4), from here it follows that $Z(t)\equiv \Psi(t) \Phi^{-1}(t),\ t\ge t_1,$ is a Hermitian solution of Eq. (2.3) on $[t_1; +\infty)$. Let $Z(t) = \begin{pmatrix}z_{11}(t) & z_{12}(t)\\ \overline{z}_{12}(t) & z_{22}(t)\end{pmatrix},\ t\ge t_1.$ Consider the Riccati equations $$ y' + b_1(t) y^2 + 2 (Re\, a_{11}(t)) y + b_2(t)|z_{12}(t)|^2 + a_{21}(t) z_{12}(t) + \overline{a}_{21}(t) \overline{z}_{12}(t) - c_{11}(t) = 0, \eqno (3.5) $$ $$ y' + b_2(t) y^2 + 2 (Re\, a_{22}(t)) y + b_1(t)|z_{12}(t)|^2 + \overline{a}_{12}(t) z_{12}(t) + a_{12}(t) \overline{z}_{12}(t) - c_{22}(t) = 0, \eqno (3.6) $$ $$ y' + b_j(t) y^2 + 2 (Re\, a_{jj}(t)) y + \chi_j(t) = 0, \eqno (3.7_j) $$ $j=1,2,\ t\ge t_1.$ By (2.6) and (2.9), from the conditions of the theorem it follows that $$ \chi_1(t) \le b_2(t)|z_{12}(t)|^2 + a_{21}(t) z_{12}(t) + \overline{a}_{21}(t) \overline{z}_{12}(t) - c_{11}(t),\ t\ge t_1, $$ $$ \chi_2(t) \le b_1(t)|z_{12}(t)|^2 + \overline{a}_{12}(t) z_{12}(t) + a_{12}(t) \overline{z}_{12}(t) - c_{22}(t),\ t\ge t_1. $$ Applying Theorem 2.1 to the pairs (3.5), $(3.7_1)$ and (3.6), $(3.7_2)$ of equations, from here we conclude that the equations $(3.7_j),\ j=1,2,$ have solutions on $[t_1; +\infty)$. By (3.1) - (3.3), from here it follows that the systems $(3.4_j),\ j=1,2,$ are not oscillatory, which contradicts the condition of the theorem. The obtained contradiction completes the proof of the theorem. Denote: $$ I_j(\xi;t) \equiv \il{\xi}{t}\exp\biggl\{-\il{\tau}{t}2(Re\, a_{jj}(s))d s \biggr\}\chi_j(\tau) d\tau, \quad t\ge \xi \ge t_0,\ j=1,2. $$ {\bf Theorem 3.2.} {\it Assume $b_1(t) \ge 0\ (\le 0),\ b_2(t) \le 0\ (\ge 0)$, and that if $b_j(t) = 0$ then $a_{j, 3-j}(t) = 0,\ j=1,2,\ t\ge t_0$; and that there exist increasing sequences $\xi_{j,0} = t_0 < \xi_{j,1} < \dots < \xi_{j,m} < \dots,\ j=1,2,$ tending to $+\infty$ such that $$ 1_j) \ (-1)^j \il{\xi_{j,m}}{t} \exp\biggl\{\il{\xi_{j,m}}{\tau}\biggl[2 Re\, a_{jj}(s) - (-1)^j I_j(\xi_{j,m};s)\biggr] d s\biggr\} \chi_j(\tau) d\tau \ge 0 \ (\le 0), $$ $t\in [\xi_{j,m}; \xi_{j,m +1}),\ m=1,2,3,\dots,\ j=1,2$. Then the system (1.1) is non oscillatory.}
Proof. Let us prove the theorem only in the case $b_1(t) \ge 0,\ b_2(t) \le 0,\ t\ge t_0$; the case $b_1(t) \le 0,\ b_2(t) \ge 0,\ t\ge t_0,$ can be treated analogously. Let $(\Phi(t), \Psi(t))$ be a conjoined solution of the system (1.2) with $\Phi(t_0) = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$ and let $[t_0; T)$ be the maximum interval such that $\det \Phi(t) \ne 0,\ t\in [t_0; T)$. Then by (2.4) the matrix function $Z(t) \equiv \Psi(t) \Phi^{-1}(t),\ t\in [t_0; T)$, is a Hermitian solution of Eq. (2.3) on $[t_0; T)$.
By (2.5), (2.7), (2.8), (2.10), (2.11), from here it follows that the subsystems (2.8) and (2.11) have solutions $(z_{11}(t), y(t))$ and $(z_{22}(t), v(t))$ respectively on $[t_0; T)$ with $z_{11}(t_0) = 1,\ z_{22}(t_0) =-1$. Let us show that $$ z_{11}(t) \ge 0, \quad t\in [t_0; T). \eqno (3.8) $$ Consider the Riccati equations $$ z' + b_1(t) z^2 + 2 (Re\, a_{11}(t)) z + b_2(t)|y(t)|^2 + \chi_1(t) = 0, \quad t\in [t_0;T), \eqno (3.9) $$ $$ z' + b_1(t) z^2 + 2 (Re\, a_{11}(t)) z + \chi_1(t) = 0, \quad t\in [t_0;T). \eqno (3.10) $$ By Theorem 2.2, from the conditions of the theorem it follows that the last equation has a nonnegative solution on $[t_0; T).$ Then, applying Theorem 2.1 to the pair of equations (3.9), (3.10), on the basis of the conditions of the theorem we conclude that Eq. (3.9) has a nonnegative solution $z_0(t)$ on $[t_0; T)$ with $z_0(t_0) = 0$. Then, since $z_{11}(t)$ is a solution of Eq. (3.9) on $[t_0;T)$ and $z_{11}(t_0) =1$, we have (3.8). Let us show that $$ z_{22}(t) \le 0, \quad t\in [t_0; T). \eqno (3.11) $$ Consider the Riccati equations $$ z' - b_2(t) z^2 + 2 (Re\, a_{22}(t)) z - \chi_2(t) = 0, \quad t\in [t_0;T), \eqno (3.12) $$ $$ z' - b_2(t) z^2 + 2 (Re\, a_{22}(t)) z - b_1(t)|v(t)|^2 - \chi_2(t) = 0, \quad t\in [t_0;T). \eqno (3.13) $$ By Theorem 2.2, from the conditions of the theorem it follows that Eq. (3.12) has a nonnegative solution $z_1(t)$ on $[t_0; T)$ with $z_1(t_0) = 0$. Then, applying Theorem 2.1 to the pair of equations (3.12) and (3.13), we derive that Eq. (3.13) has a nonnegative solution $z_2(t)$ on $[t_0; T)$ with $z_2(t_0) = 0$. Hence, since obviously $-z_{22}(t)$ is a solution of Eq. (3.13) on $[t_0; T)$ and $-z_{22}(t_0) =1$, we have (3.11). Since $b_1(t) \ge 0,\ b_2(t) \le 0,\ t\in [t_0;T)$, from (3.8) and (3.11) it follows that $$ \il{t_0}{t}\Bigl[b_1(\tau) z_{11}(\tau) + b_2(\tau) z_{22}(\tau)\Bigr]d\tau \ge 0, \quad t\in [t_0; T). \eqno (3.14) $$ To complete the proof of the theorem it remains to show that $T = + \infty$. Suppose $T < + \infty$. Then by virtue of Lemma 2.1, from (3.14) it follows that $[t_0; T)$ is not the maximum existence interval for $Z(t)$. By (2.4), from here it follows that $\det \Phi(t) \ne 0,\ t\in [t_0; T_1),$ for some $T_1 > T$. We have obtained a contradiction, which completes the proof of the theorem.
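The following simple example of ours illustrates Theorem 3.2 (cf. Remark 3.2 below). Take $A(t) \equiv 0,\ B(t) = diag\{1, -1\},\ C(t) = diag\{c_{11}(t), c_{22}(t)\}$ with continuous $c_{11}(t) \ge 0,\ c_{22}(t) \le 0,\ t\ge t_0$. Then $\chi_1(t) = -c_{11}(t) \le 0,\ \chi_2(t) = -c_{22}(t) \ge 0$, the conditions $1_j),\ j=1,2,$ are satisfied, and the system (1.1) is non oscillatory. This agrees with the componentwise picture: here (1.1) decouples into $\phi_1'' = c_{11}(t)\phi_1$ and $\phi_2'' = -c_{22}(t)\phi_2$, whose nonnegative coefficients on the right-hand side make both scalar equations non oscillatory.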
{\bf Remark 3.2.} {\it The conditions $1_j),\ j=1,2,$ are satisfied if, in particular, $(-1)^j\chi_j(t) \ge 0\ (\le 0),\ t \ge t_0$.} Denote: $$ \chi_3(t) \equiv b_2(t)\biggl[\mathfrak{M}(t) + \il{t_0}{t}\biggl|\exp\biggl\{- \il{\tau}{t}\bigl[\overline{a}_{11}(s) + a_{22}(s)\bigr]d s\biggr\}\biggl[\biggl(\frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\biggr)' + \frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]\biggr|d\tau\biggr]^2 - \frac{|a_{21}(t)|^2}{b_2(t)} - c_{11}(t), $$ $$ \chi_4(t) \equiv b_1(t)\biggl[\mathfrak{M}(t) + \il{t_0}{t}\biggl|\exp\biggl\{- \il{\tau}{t}\bigl[\overline{a}_{11}(s) + a_{22}(s)\bigr]d s\biggr\}\biggl[\biggl(\frac{a_{12}(\tau)}{b_1(\tau)}\biggr)' + \frac{a_{12}(\tau)}{b_1(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]\biggr|d\tau\biggr]^2 - \frac{|a_{12}(t)|^2}{b_1(t)} - c_{22}(t), $$ $$ I_{j+2}(\xi;t) \equiv \il{\xi}{t}\exp\biggl\{-\il{\tau}{t}2(Re\, a_{jj}(s))d s \biggr\}\chi_{j+2}(\tau) d\tau, \quad t\ge \xi \ge t_0,\ j=1,2. $$ {\bf Theorem 3.3.} {\it Let the following conditions be satisfied: \noindent 1) $b_j(t)> 0,\ t\ge t_0,\ j=1,2;$ \noindent 2) the functions $a_{12}(t)/b_1(t)$ and $\overline{a}_{21}(t)/b_2(t)$ are continuously differentiable on $[t_0;+\infty)$; \noindent 3) there exist increasing sequences $\xi_{j,0} = t_0 < \xi_{j,1} < \dots < \xi_{j,m} < \dots,\ j=1,2,$ tending to $+\infty$ such that $$ \il{\xi_{j,m}}{t} \exp\biggl\{\il{\xi_{j,m}}{\tau}\biggl[2 Re\, a_{jj}(s) - I_{j+2}(\xi_{j,m};s)\biggr] d s\biggr\} \chi_{j+2}(\tau) d\tau \le 0, \quad t\in [\xi_{j,m}; \xi_{j,m +1}), $$ $m=1,2,3,\dots,\ j=1,2$. Then the system (1.1) is non oscillatory.}
Proof. Let $Z(t) \equiv \begin{pmatrix}z_{11}(t) & z_{12}(t)\\\overline{z}_{12}(t) & z_{22}(t)\end{pmatrix}$ be the Hermitian solution of Eq. (2.3) on $[t_0; T)$ satisfying the initial condition $Z(t_0) = \begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}$, where $[t_0;T)$ is the maximum existence interval for $Z(t)$. Due to (2.4), to prove the theorem it is enough to show that $$ T= + \infty. \eqno (3.15) $$ By (2.5), (2.7), (2.8), (2.10), (2.11), from the conditions 1) and 2) it follows that $(z_{11}(t), z_{12}(t) + \overline{a}_{21}(t)/ b_2(t))$ and $(z_{22}(t), z_{12}(t) + a_{12}(t)/ b_1(t))$ are solutions of the subsystems (2.8) and (2.11) respectively on $[t_0; T)$. Let us show that $$ z_{jj}(t) > 0, \quad t\in [t_0; T). \eqno (3.16) $$ Suppose this is not so. Then there exists $T_1 \in (t_0; T)$ such that $$ z_{11}(t)z_{22}(t) > 0,\ t\in [t_0; T_1), \quad z_{11}(T_1)z_{22}(T_1) = 0. \eqno (3.17) $$ Without loss of generality we may take that $a_{12}(t_0) = a_{21}(t_0) = 0$.
Then by virtue of Lemma 2.2, from (3.17) it follows that $$ \biggl|z_{12}(t) + \frac{\overline{a}_{21}(t)}{b_2(t)}\biggr| \le \mathfrak{M}(t) + \il{t_0}{t}\biggl|\exp\biggl\{ - \il{\tau}{t}\bigl(\overline{a}_{11}(s) + a_{22}(s)\bigr)d s\biggr\}\biggl[\biggl(\frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\biggr)' + \frac{\overline{a}_{21}(\tau)}{b_2(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]\biggr|d\tau, $$ $$ \biggl|z_{12}(t) + \frac{a_{12}(t)}{b_1(t)}\biggr| \le \mathfrak{M}(t) + \il{t_0}{t}\biggl|\exp\biggl\{ - \il{\tau}{t}\bigl(\overline{a}_{11}(s) + a_{22}(s)\bigr)d s\biggr\}\biggl[\biggl(\frac{a_{12}(\tau)}{b_1(\tau)}\biggr)' + \frac{a_{12}(\tau)}{b_1(\tau)}\bigl(\overline{a}_{11}(\tau) + a_{22}(\tau)\bigr) + c_{12}(\tau)\biggr]\biggr|d\tau, \quad t\in [t_0;T_1). $$ Hence $$ b_2(t)\biggl|z_{12}(t) + \frac{\overline{a}_{21}(t)}{b_2(t)}\biggr|^2 - \frac{|a_{21}(t)|^2}{b_2(t)} - c_{11}(t) \le \chi_3(t), $$ $$ b_1(t)\biggl|z_{12}(t) + \frac{a_{12}(t)}{b_1(t)}\biggr|^2 - \frac{|a_{12}(t)|^2}{b_1(t)} - c_{22}(t) \le \chi_4(t), \quad t\in [t_0;T_1). $$ By virtue of Theorem 2.1 and Theorem 2.2, from here and from the condition 3) it follows that the Riccati equations $$ z' + b_1(t) z^2 + 2(Re\, a_{11}(t)) z + b_2(t)\biggl|z_{12}(t) + \frac{\overline{a}_{21}(t)}{b_2(t)}\biggr|^2 - \frac{|a_{21}(t)|^2}{b_2(t)} - c_{11}(t) = 0, \eqno (3.18) $$ $$ z' + b_2(t) z^2 + 2(Re\, a_{22}(t)) z + b_1(t)\biggl|z_{12}(t) + \frac{a_{12}(t)}{b_1(t)}\biggr|^2 - \frac{|a_{12}(t)|^2}{b_1(t)} - c_{22}(t) = 0, \eqno (3.19) $$ $t\in [t_0; T_1),$ have nonnegative solutions $z_1(t)$ and $z_2(t)$ respectively on $[t_0; T_1)$ with $z_1(t_0) = z_2(t_0) = 0$. Obviously $z_{11}(t)$ and $z_{22}(t)$ are solutions of Eq. (3.18) and (3.19) respectively on $[t_0; T_1)$. Therefore, since $z_{jj}(t_0) = 1 > z_j(t_0) = 0,\ j=1,2$, due to the uniqueness theorem $z_{jj}(t) > 0,\ t\in [t_0;T_1],\ j=1,2,$ which contradicts (3.17). The obtained contradiction proves (3.16). From (3.16) and 1) it follows that $$ \il{t_0}{t}\bigl[b_1(\tau) z_{11}(\tau) + b_2(\tau) z_{22}(\tau)\bigr] d \tau \ge 0, \quad t\in [t_0; T). \eqno (3.20) $$ Suppose $T < + \infty$. Then by Lemma 2.1, from (3.20) it follows that $[t_0; T)$ is not the maximum existence interval for $Z(t)$, which contradicts our assumption. The obtained contradiction proves (3.15). The theorem is proved. {\bf Remark 3.3.} {\it The conditions 3) of Theorem 3.3 are satisfied if, in particular, $\chi_j(t) \le 0,\ t\ge t_0,\ j=1,2.$} {\bf 3.2. The case when $B(t)$ is nonnegative definite}. In this subsection we will assume that $B(t)$ is nonnegative definite and $\sqrt{B(t)}$ is continuously differentiable on $[t_0;+ \infty)$. Consider the matrix equation $$ \sqrt{B(t)} X [A(t) \sqrt{B(t)} - \sqrt{B(t)}'] = A(t) \sqrt{B(t)} - \sqrt{B(t)}', \quad t\ge t_0. \eqno (3.21) $$ Obviously this equation always has a solution on $[a;b]\ (\subset [t_0; + \infty))$ when $B(t) > 0,\ t\in [a;b]\ (X(t) = (\sqrt{B(t)})^{-1},\ t\in [a;b])$.
It may also have a solution on $[a;b]$ in some cases when $B(t) \ge 0,\ t\in [a;b]$ (e.g., $A(t) = \begin{pmatrix} a_1(t) & a_2(t)\\ 0 & 0 \end{pmatrix},\ B(t) = \begin{pmatrix} b_1(t) & 0 \\ 0 & 0 \end{pmatrix},\ b_1(t) > 0,\ t\in [a;b]$). In this subsection we will also assume that Eq. (3.21) has a solution on the whole of $[t_0; + \infty)$. Let $F(t)$ be a solution of Eq. (3.21) on $[t_0; + \infty)$. Denote: $$P(t) \equiv F(t) [A(t)\sqrt{B(t)} - \sqrt{B(t)}'] = (p_{jk}(t))_{j,k =1}^2, \eqno (3.22)$$ $$Q(t) \equiv \sqrt{B(t)} C(t) \sqrt{B(t)} = (q_{jk}(t))_{j,k =1}^2, \quad \widetilde{\chi}_j(t) \equiv -q_{jj}(t) - |p_{3 -j, j}(t)|^2,\ j=1,2,\ t\ge t_0.$$ {\bf Corollary 3.1}. {\it The system (1.1) is oscillatory provided one of the equations $$ \phi_1'' + 2 [Re\, p_{jj}(t)] \phi_1' + \widetilde{\chi}_j(t) \phi_1 = 0,\ j =1,2,\ t\ge t_0, \eqno (3.23_j) $$ is oscillatory.}
Proof. Multiply Eq. (2.3) by $\sqrt{B(t)}$ on the left and on the right. Taking into account the equality $(\sqrt{B(t)}Z \sqrt{B(t)})' = \sqrt{B(t)}Z' \sqrt{B(t)} + \sqrt{B(t)}'Z \sqrt{B(t)} + \sqrt{B(t)}Z \sqrt{B(t)}',\ t\ge t_0,$ we obtain $$ V' + V^2 + P^*(t) V + V P(t) - Q(t) = 0, \quad t\ge t_0, \eqno (3.24) $$ where $V \equiv \sqrt{B(t)}Z \sqrt{B(t)}$. To this equation corresponds the following matrix Hamiltonian system $$ \sist{\Phi'= P(t)\Phi + \Psi;}{\Psi' = Q(t)\Phi - P^*(t)\Psi, \quad t\ge t_0.} \eqno (3.25) $$ Suppose the system (1.1) is not oscillatory. Then by (2.4) Eq. (2.3) has a Hermitian solution $Z(t)$ on $[t_1; + \infty)$ for some $t_1 \ge t_0$. Therefore $V(t) \equiv \sqrt{B(t)} Z(t) \sqrt{B(t)},\ t\ge t_1,$ is a Hermitian solution of Eq. (3.24) on $[t_1; + \infty)$, and hence the system (3.25) has a conjoined solution $(\Phi (t), \Psi(t))$ such that $\det \Phi(t) \ne 0,\ t\ge t_1.$ It means that the Hamiltonian system $$ \sist{\phi'= P(t)\phi + \psi;}{\psi' = Q(t)\phi - P^*(t)\psi, \quad t\ge t_0,} $$ is not oscillatory. By Theorem 3.1, from here it follows that the scalar systems $$ \sist{\phi_1'= 2 Re\, p_{jj}(t)\,\phi_1 + \psi_1;}{\psi_1' = - \widetilde{\chi}_j(t) \phi_1, \quad t\ge t_0,} $$ $j = 1,2,$ are not oscillatory. Therefore the corresponding equations $(3.23_j),\ j=1,2,$ are not oscillatory, which contradicts the conditions of the corollary. The corollary is proved.
Denote: $$ \widetilde{\mathfrak{M}}(t)\equiv \max\limits_{\tau\in [t_0;t]}\biggl|\exp\biggl\{-\il{\tau}{t}\bigl(\overline{p}_{11}(s) + p_{22}(s)\bigr)ds\biggr\}\bigl(p_{12}(\tau) - \overline{p}_{21}(\tau)\bigr)\biggr|; $$ $$ \widetilde{\chi}_3(t) \equiv \biggl[\widetilde{\mathfrak{M}}(t) + \il{t_0}{t}\biggl|\exp\biggl\{- \il{\tau}{t}[\overline{p}_{11}(s) + p_{22}(s)]d s\biggr\}\Bigl[\overline{p}_{21}'(\tau) + \overline{p}_{21}(\tau)\bigl(\overline{p}_{11}(\tau) + p_{22}(\tau)\bigr) + q_{12}(\tau)\Bigr]\biggr|d\tau\biggr]^2 - |p_{21}(t)|^2 - q_{11}(t); $$ $$ \widetilde{\chi}_4(t) \equiv \biggl[\widetilde{\mathfrak{M}}(t) + \il{t_0}{t}\biggl|\exp\biggl\{- \il{\tau}{t}\bigl[\overline{p}_{11}(s) + p_{22}(s)\bigr]d s\biggr\}\Bigl[p_{12}'(\tau) + p_{12}(\tau)\bigl(\overline{p}_{11}(\tau) + p_{22}(\tau)\bigr) + q_{12}(\tau)\Bigr]\biggr|d\tau\biggr]^2 - |p_{12}(t)|^2 - q_{22}(t), \quad t\ge t_0; $$ $$ \widetilde{I}_{j+2}(\xi;t) \equiv \il{\xi}{t}\exp\biggl\{-\il{\tau}{t}2(Re\, p_{jj}(s))d s \biggr\}\widetilde{\chi}_{j+2}(\tau) d\tau, \quad t\ge \xi \ge t_0,\ j=1,2. $$ {\bf Theorem 3.4.} {\it Let the following conditions be satisfied: \noindent $1')$ $B(t) \ge 0,\ t\ge t_0;$ \noindent $2')$ Eq. (3.21) has a solution $F(t)$ on $[t_0; + \infty)$; \noindent $3')$ the functions $p_{12}(t)$ and $p_{21}(t)$ defined by (3.22) are continuously differentiable on $[t_0; + \infty)$; \noindent $4')$ there exist increasing sequences $\xi_{j,0} = t_0 < \xi_{j,1} < \dots < \xi_{j,m} < \dots$ tending to $+\infty$ such that $$ \il{\xi_{j,m}}{t} \exp\biggl\{\il{\xi_{j,m}}{\tau}\biggl[2 Re\, p_{jj}(s) - \widetilde{I}_{j+2}(\xi_{j,m};s)\biggr] d s\biggr\} \widetilde{\chi}_{j+2}(\tau) d\tau \le 0, \quad t\in [\xi_{j,m}; \xi_{j,m +1}), $$ $m=1,2,3,\dots,\ j=1,2$. Then the system (1.1) is non oscillatory.} Proof. Let $Z(t) \equiv \begin{pmatrix} z_{11}(t) & z_{12}(t)\\ \overline{z}_{12}(t) & z_{22}(t)\end{pmatrix}$ be the Hermitian solution of Eq. (2.3) satisfying the initial condition $Z(t_0) = \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}$, and let $[t_0;T)$ be the maximum existence interval for $Z(t)$. Then $V(t) \equiv \sqrt{B(t)}Z(t) \sqrt{B(t)}$ is a solution of Eq. (3.24) on $[t_0; T)$. Without loss of generality we may assume that $B(t_0) = \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}$. Then $V(t_0) = \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}$, and by analogy with the proof of Theorem 3.3 one can show that from the conditions of the theorem it follows that $$ \il{t_0}{t} tr\, V(\tau) d \tau \ge 0, \quad t\in [t_0;T). \eqno (3.26) $$ By virtue of Lemma 2.3 we have $tr\, V(t) = tr [B(t) Z(t)],\ t\in [t_0;T)$. From here and from (3.26) it follows that $$ \il{t_0}{t} tr [B(\tau) Z(\tau)] d \tau \ge 0, \quad t\in [t_0;T). \eqno (3.27) $$ To complete the proof of the theorem it remains to show that $T = + \infty$. Suppose $T < + \infty$. Then by virtue of Lemma 2.1, from (3.27) it follows that $[t_0;T)$ is not the maximum existence interval for $Z(t)$, which contradicts our assumption. The obtained contradiction shows that $T = + \infty$. The theorem is proved. Example 3.1.
Consider the second order vector equation $$ \phi'' + K(t)\phi = 0, \quad t\ge t_0, \eqno (3.28) $$ where $K(t) \equiv \begin{pmatrix} \mu(t) & 10 i\\ -10 i & - t^2\end{pmatrix},\ \mu(t) \equiv p_1 \sin (\lambda_1 t + \theta_1) + p_2 \sin (\lambda_2 t + \theta_2),\ t\ge t_0$, and $p_j,\ \lambda_j\ne 0,\ \theta_j,\ j=1,2,$ are real constants such that $\lambda_1$ and $\lambda_2$ are rationally independent. This equation is equivalent to the system (1.1) with $A(t)\equiv 0,\ B(t) \equiv \begin{pmatrix}1 & 0\\0 & 1\end{pmatrix},\ C(t) = - K(t),\ t\ge t_0$. Hence by Theorem 3.1, Eq. (3.28) is oscillatory provided the following scalar system is oscillatory: $$ \sist{\phi_1' = \psi_1;}{\psi_1' = - \mu(t) \phi_1, \quad t\ge t_0.} $$ This system is equivalent to the second order scalar equation $$ \phi_1'' + \mu(t) \phi_1 = 0, \quad t\ge t_0, $$ which is oscillatory (see [15]). Therefore Eq. (3.28) is oscillatory. It is not difficult to verify that the results of the works [16 - 20] are not applicable to Eq. (3.28).
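A quick numerical illustration (ours, not a proof; all parameter values below are arbitrary choices) supports the oscillation of the scalar equation above: one integrates $\phi_1'' + \mu(t)\phi_1 = 0$ with a quasi-periodic coefficient and counts the sign changes of $\phi_1$ on a long interval.

\begin{verbatim}
# Integrate phi'' = -mu(t) phi with mu(t) = sin(t) + sin(sqrt(2) t)
# (lambda_1 = 1 and lambda_2 = sqrt(2) are rationally independent)
# and count the sign changes of phi on [0, 500].
import numpy as np

mu = lambda t: np.sin(t) + np.sin(np.sqrt(2.0) * t)

t, dt, T = 0.0, 1e-3, 500.0
phi, dphi = 1.0, 0.0
sign_changes, prev_sign = 0, 1.0
while t < T:
    dphi -= mu(t) * phi * dt   # semi-implicit Euler step
    phi += dphi * dt
    t += dt
    if phi * prev_sign < 0.0:
        sign_changes += 1
        prev_sign = -prev_sign
print("sign changes of phi_1 on [0, 500]:", sign_changes)
\end{verbatim}

The count keeps growing with the length of the interval, in agreement with the oscillation result quoted from [15].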
Example 3.2. Let $$ B(t) = \begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix}, \quad t\ge t_0.
Example 3.2. Let
$$
B(t) = \begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix}, \quad t\ge t_0. \eqno (3.29)
$$
Then $\sqrt{B(t)} = \frac{\sqrt{2}}{2} \begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix}$, $\bigl(\sqrt{B(t)}\bigr)' \equiv 0$, $t\ge t_0$, and $F(t) = \sqrt{2}\begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}$, $t\ge t_0$, is a solution of Eq. (3.21) on $[t_0;+\infty)$,
$$
P(t) = \begin{pmatrix} a_{11}(t) + a_{12}(t) & a_{11}(t) + a_{12}(t) \\ a_{21}(t) + a_{22}(t) & a_{21}(t) + a_{22}(t) \end{pmatrix}, \eqno (3.30)
$$
$$
Q(t) = \bigl(c_{11}(t) + 2\,{\rm Re}\, c_{12}(t) + c_{22}(t)\bigr)B(t), \quad t\ge t_0. \eqno (3.31)
$$
Assume
$$
a_{11}(t) + a_{12}(t) = a_{21}(t) + a_{22}(t) \equiv 0, \quad t\ge t_0. \eqno (3.32)
$$
Then, taking into account (3.30) and (3.31), we have $\widetilde{\chi}_1(t) = \widetilde{\chi}_2(t) = -c_{11}(t) - 2\,{\rm Re}\, c_{12}(t) - c_{22}(t)$, $t\ge t_0$. Therefore, by Corollary 3.1, under the restrictions (3.29) and (3.32) the system (1.1) is oscillatory provided the scalar equation
$$
\phi_1''(t) - [c_{11}(t) + 2\,{\rm Re}\, c_{12}(t) + c_{22}(t)]\,\phi_1(t) = 0, \quad t\ge t_0,
$$
is oscillatory. Assume now
$$
a_{11}(t) + a_{12}(t) = a_{21}(t) + a_{22}(t) = \frac{\alpha}{t}, \quad c_{11}(t) + 2\,{\rm Re}\, c_{12}(t) + c_{22}(t) = \frac{\alpha - \alpha^2}{t^2}, \eqno (3.33)
$$
$0\le \alpha \le 1$, $t\ge 1$. Then, taking into account (3.30) and (3.31), it is not difficult to verify that $\widetilde{\chi}_3(t) = \widetilde{\chi}_4(t) = \frac{\alpha^2 - \alpha}{t^2} \le 0$, $t\ge 1$. Hence, by Theorem 3.4, under the restrictions (3.29) and (3.33) the system (1.1) is non-oscillatory.
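For instance (a worked instance of (3.33), our own illustration), taking $\alpha = \frac{1}{2}$ gives
$$
a_{11}(t) + a_{12}(t) = a_{21}(t) + a_{22}(t) = \frac{1}{2t}, \quad c_{11}(t) + 2\,{\rm Re}\, c_{12}(t) + c_{22}(t) = \frac{1}{4t^2}, \quad \widetilde{\chi}_3(t) = \widetilde{\chi}_4(t) = -\frac{1}{4t^2} \le 0,
$$
so Theorem 3.4 applies and the corresponding system (1.1) is non-oscillatory.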
Assume now:

\noindent $\alpha_1)$ $a_{11}(t) + a_{12}(t) = a_{21}(t) + a_{22}(t) > 0$, $t\ge t_0$;

\noindent $\alpha_2)$ $a_{11}(t) + a_{12}(t)$ is increasing and continuously differentiable on $[t_0;+\infty)$;

\noindent $\alpha_3)$ $\frac{|(a_{11}(t) + a_{12}(t))' + c_{11}(t) + 2\,{\rm Re}\, c_{12}(t) + c_{22}(t)|}{a_{11}(t) + a_{12}(t)} \le \lambda = {\rm const}$, $t\ge t_0$.

\noindent Then, taking into account (3.30) and (3.31), it is not difficult to verify that $\widetilde{\chi}_3(t) \le \lambda - [c_{11}(t) + 2\,{\rm Re}\, c_{12}(t) + c_{22}(t)]$ and $\widetilde{\chi}_4(t) \le \lambda - [c_{11}(t) + 2\,{\rm Re}\, c_{12}(t) + c_{22}(t)]$, $t\ge t_0$. Therefore, by virtue of Theorem 3.4, under the restrictions (3.29) and $\alpha_1)$--$\alpha_3)$ the system (1.1) is non-oscillatory.

{\bf Remark 3.4.} {\it Since under the restriction (3.29) $\det B(t) \equiv 0$, $t\ge t_0$, the results of the works [1--9] are not applicable to the system (1.1) with (3.29).}

\vskip 20pt \centerline{\bf References} \vskip 20pt

\noindent 1. L. Li, F. Meng and Z. Zheng, Oscillation Results Related to Integral Averaging Technique for Linear Hamiltonian Systems, Dynamic Systems and Applications, 18 (2009), pp. 725--736.

\noindent 2. F. Meng and A. B. Mingarelli, Oscillation of Linear Hamiltonian Systems, Proc. Amer. Math. Soc., Vol. 131, Num. 3, 2002, pp. 897--904.

\noindent 3. Q. Yang, R. Mathsen and S. Zhu, Oscillation Theorems for Self-Adjoint Matrix Hamiltonian Systems, J. Diff. Equ., 19 (2003), pp. 306--329.

\noindent 4. Z. Zheng and S. Zhu, Hartman Type Oscillatory Criteria for Linear Matrix Hamiltonian Systems, Dynamic Systems and Applications, 17 (2008), pp. 85--96.

\noindent 5. Z. Zheng, Linear transformation and oscillation criteria for Hamiltonian systems, J. Math. Anal. Appl., 332 (2007), pp. 236--245.

\noindent 6. I. S. Kumary and S. Umamaheswaram, Oscillation Criteria for Linear Matrix Hamiltonian Systems, Journal of Differential Equations, 165 (2000), pp. 174--198.

\noindent 7. Sh. Chen and Z. Zheng, Oscillation Criteria of Yan Type for Linear Hamiltonian Systems, Computers and Mathematics with Applications, 46 (2003), pp. 855--862.

\noindent 8. Y. G. Sun, New oscillation criteria for linear matrix Hamiltonian systems, J. Math. Anal. Appl., 279 (2003), pp. 651--658.

\noindent 9. K. I. Al-Dosary, H. Kh. Abdullah and D. Husein, Short note on oscillation of matrix Hamiltonian systems, Yokohama Mathematical Journal, Vol. 50, 2003.

\noindent 10. G. A. Grigorian, Oscillatory and Non Oscillatory Criteria for the Systems of Two Linear First Order Two by Two Dimensional Matrix Ordinary Differential Equations, Archivum Mathematicum, Tomus 54 (2018), pp. 189--203.

\noindent 11. G. A. Grigorian, On Two Comparison Tests for Second-Order Linear Ordinary Differential Equations (Russian), Differ. Uravn., 47 (2011), no. 9, pp. 1225--1240; translation in Differ. Equ., 47 (2011), no. 9, pp. 1237--1252.

\noindent 12. G. A. Grigorian, Two Comparison Criteria for Scalar Riccati Equations with Applications, Russian Mathematics (Iz. VUZ), 56 (2012), no. 11, pp. 17--30.

\noindent 13. G. A. Grigorian, On the Stability of Systems of Two First-Order Linear Ordinary Differential Equations, Differ. Uravn., 51 (2015), no. 3, pp. 283--292.

\noindent 14. G. A. Grigorian, Oscillatory Criteria for the Systems of Two First-Order Linear Ordinary Differential Equations, Rocky Mountain Journal of Mathematics, Vol. 47, Num. 5, 2017, pp. 1497--1524.
\noindent 15. G. A. Grigorian, On One Oscillatory Criterion for the Second Order Linear Ordinary Differential Equations, Opuscula Math., 36 (2016), no. 5, pp. 589--601. http://dx.doi.org/10.7494/OpMath.2016.36.5.589
\noindent 16. L. H. Erbe, Q. Kong and Sh. Ruan, Kamenev Type Theorems for Second Order Matrix Differential Systems, Proc. Amer. Math. Soc., Vol. 117, Num. 4, 1993, pp. 957--962.

\noindent 17. R. Byers, B. J. Harris and M. K. Kwong, Weighted Means and Oscillation Conditions for Second Order Matrix Differential Equations, Journal of Differential Equations, 61 (1986), pp. 164--177.

\noindent 18. G. J. Butler, L. H. Erbe and A. B. Mingarelli, Riccati Techniques and Variational Principles in Oscillation Theory for Linear Systems, Trans. Amer. Math. Soc., Vol. 303, Num. 1, 1987, pp. 263--282.

\noindent 19. A. B. Mingarelli, On a Conjecture for Oscillation of Second Order Ordinary Differential Systems, Proc. Amer. Math. Soc., Vol. 82, Num. 4, 1981, pp. 593--598.

\noindent 20. Q. Wang, Oscillation Criteria for Second Order Matrix Differential Systems, Proc. Amer. Math. Soc., Vol. 131, Num. 3, 2002, pp. 897--904.

\end{document}
\begin{document} \title{Generalized solutions for the Euler-Bernoulli model with Zener viscoelastic foundations and distributional forces\thanks{Supported by the Austrian Science Fund (FWF) START program Y237 on 'Nonlinear distributional geometry', and the Serbian Ministry of Science Project 144016} } \author{ G\"unther H\"ormann \footnote{Faculty of Mathematics, University of Vienna, Nordbergstr.\ 15, A-1090 Vienna, Austria, Electronic mail: [email protected]}\\ Sanja Konjik \footnote{Faculty of Sciences, Department of Mathematics and Informatics, University of Novi Sad, Trg D. Obradovi\'ca 4, 21000 Novi Sad, Serbia, Electronic mail: [email protected]}\\ Ljubica Oparnica \footnote{Faculty of Education, University of Novi Sad, Podgori\v cka 4, 25000 Sombor, Serbia, Electronic mail: [email protected]} } \date{} \maketitle \begin{abstract} We study the initial-boundary value problem for an Euler-Bernoulli beam model with discontinuous bending stiffness lying on a viscoelastic foundation and subjected to an axial force and an external load, both of Dirac type. The corresponding model equation is a fourth-order partial differential equation and involves discontinuous and distributional coefficients as well as a distributional right-hand side. Moreover, the viscoelastic foundation is of Zener type and is described by a fractional differential equation with respect to time. We show how functional analytic methods for abstract variational problems can be applied in combination with regularization techniques to prove existence and uniqueness of generalized solutions. \vskip5pt \noindent {\bf Mathematics Subject Classification (2010):} 35D30, 46F30, 35Q74, 26A33, 35A15 \vskip5pt \noindent {\bf Keywords:} generalized solutions, Colombeau generalized functions, fractional derivatives, functional analytic methods, energy estimates \end{abstract} \section{Introduction and preliminaries} \label{ssec:intro} We study existence and uniqueness of a generalized solution to the initial-boundary value problem \begin{align} & \ensuremath{\partial}^2_tu + Q(t,x,\ensuremath{\partial}_x)u + g = h, \label{eq:PDE} \\ & D_t^\alpha u + u = \theta\, D_t^\alpha g + g, \label{eq:FDE}\\ & u|_{t=0} = f_1, \quad \ensuremath{\partial}_t u|_{t=0} = f_2, \nonumber \tag{IC}\\ & u|_{x=0} =u|_{x=1}=0, \quad \ensuremath{\partial}_x u|_{x=0} = \ensuremath{\partial}_x u|_{x=1}=0, \nonumber \tag{BC} \end{align} where $Q$ is a differential operator of the form $$ Q u := \ensuremath{\partial}_x^2(c(x)\ensuremath{\partial}_x^2 u) + b(x,t)\ensuremath{\partial}_x^2 u, $$ $b,c,g,h,f_1$ and $f_2$ are generalized functions, $\theta$ is a constant with $0<\theta<1$, and $D_t^\alpha$ denotes the left Riemann-Liouville fractional derivative of order $\alpha$ with respect to $t$. Problem (\ref{eq:PDE})-(\ref{eq:FDE}) is equivalent to \begin{equation} \label{eq:IntegroPDE} \ensuremath{\partial}^2_tu + Q(t,x,\ensuremath{\partial}_x)u + Lu = h, \end{equation} with $L$ being the (convolution) operator given by ($\ensuremath{{\cal L}}$ denoting the Laplace transform) \begin{equation} \label{eq:operator_L} Lu(x,t) = \ensuremath{{\cal L}}^{-1} \left(\frac{1+s^\alpha}{1+\theta s^\alpha}\right)(t) \ast_t u(x,t), \end{equation} with the same initial (IC) and boundary (BC) conditions (cf.\ Section \ref{sec:EBmodel}).
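To see, at least formally, why (\ref{eq:PDE})-(\ref{eq:FDE}) and (\ref{eq:IntegroPDE}) carry the same information (the rigorous equivalence is proved in Section \ref{sec:EBmodel}), apply the Laplace transform with respect to $t$ in (\ref{eq:FDE}), assuming vanishing initial data so that $\ensuremath{{\cal L}}(D_t^\alpha u)(s) = s^\alpha\, \ensuremath{{\cal L}} u(s)$. This gives
$$
(1+s^\alpha)\, \ensuremath{{\cal L}} u = (1+\theta s^\alpha)\, \ensuremath{{\cal L}} g, \qquad \mbox{hence} \qquad \ensuremath{{\cal L}} g = \frac{1+s^\alpha}{1+\theta s^\alpha}\, \ensuremath{{\cal L}} u,
$$
i.e.\ $g = Lu$ with $L$ as in (\ref{eq:operator_L}); substituting this into (\ref{eq:PDE}) yields (\ref{eq:IntegroPDE}).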
The precise structure of the above problem is motivated by a model from mechanics describing the displacement of a beam under axial and transversal forces, connected to a viscoelastic foundation, which we briefly discuss in Subsection \ref{ssec:motivation}. We then briefly introduce the theory of Colombeau generalized functions, which forms the framework for our work. Similar problems involving distributional and generalized solutions to Euler-Bernoulli beam models have been studied in \cite{BiondiCaddemi, HoermannOparnica07, HoermannOparnica09, YavariSarkani, YavariSarkaniReddy}. The development of the theory in the paper is divided into two parts. In Section \ref{sec:abstract} we consider the initial-boundary value problem (\ref{eq:IntegroPDE})-(IC)-(BC) on the abstract level. We prove, in Theorem \ref{lemma:m-a}, an existence result for the abstract variational problem corresponding to (\ref{eq:IntegroPDE})-(IC)-(BC) and derive the energy estimate (\ref{eq:EE}), which guarantees uniqueness and serves as a key tool in the analysis of Colombeau generalized solutions. In Section \ref{sec:EBmodel}, we first show equivalence of the system (\ref{eq:PDE})-(\ref{eq:FDE}) with the integro-differential equation (\ref{eq:IntegroPDE}), and apply the results from Section \ref{sec:abstract} to the original problem, establishing weak solutions if the coefficients are in $L^\infty$. Afterwards we allow the coefficients to be more irregular, set up the problem and show existence and uniqueness of solutions in the space of generalized functions. \subsection{The Euler-Bernoulli beam with viscoelastic foundation} \label{ssec:motivation} Consider an Euler-Bernoulli beam positioned on a viscoelastic foundation (cf.\ \cite{Atanackovic-book} for the mechanical background). The differential equation of transversal motion reads \begin{equation} \label{eq:mot-trans motion} \frac{\ensuremath{\partial}^2}{\ensuremath{\partial} x^2}\left(A(x)\frac{\ensuremath{\partial}^2u}{\ensuremath{\partial} x^2}\right) + P(t) \frac{\ensuremath{\partial}^2u}{\ensuremath{\partial} x^2} + R(x) \frac{\ensuremath{\partial}^2u}{\ensuremath{\partial} t^2} + g(x,t)= h(x,t), \qquad x\in [0,1],\, t > 0, \end{equation} where \begin{itemize} \item $A$ denotes the bending stiffness and is given by $A(x) = EI_1 + H(x-x_0)EI_2$. Here, the constant $E$ is the modulus of elasticity, $I_1$, $I_2$, $I_1\neq I_2$, are the moments of inertia corresponding to the two parts of the beam, and $H$ is the Heaviside jump function; \item $R$ denotes the line density (i.e., mass per length) of the material and is of the form $R(x)= R_0 + H(x-x_0)(R_1-R_2)$; \item $P(t)$ is the axial force, assumed to be of the form $P(t)=P_0 + P_1\delta(t-t_1)$, $P_0,P_1>0$; \item $g=g(x,t)$ denotes the force terms coming from the foundation; \item $u=u(x,t)$ denotes the displacement; \item $h=h(x,t)$ is the prescribed external load (e.g., for a moving load it is of the form $h(x,t)=H_0\delta(x-ct)$, where $H_0$ and $c$ are constants). \end{itemize} Since the beam is connected to the viscoelastic foundation, there is a constitutive equation describing the relation between the foundation force and the displacement of the beam.
We use the generalized Zener model given by \begin{equation} \label{eq:mot-const eq} D_t^\alpha u(x,t) + u(x,t) = \theta\, D_t^\alpha g(x,t) + g(x,t), \end{equation} where $0<\theta<1$ and $D_t^\alpha$ denotes the left Riemann-Liouville fractional derivative of order $\alpha$ with respect to $t$, defined by $$ D_t^\alpha u (t) = \frac{1}{\Gamma (1-\alpha)} \frac{d}{dt} \int_0^t \frac{u(\tau)}{(t-\tau)^\alpha} \,d\tau. $$ System (\ref{eq:mot-trans motion})-(\ref{eq:mot-const eq}) is supplemented with initial conditions $$ u(x,0) = f_1(x), \qquad \ensuremath{\partial}_t u (x,0) = f_2(x), $$ where $f_1$ and $f_2$ are the initial displacement and the initial velocity. If $f_1(x)=f_2(x)=0$, the only solution to (\ref{eq:mot-trans motion})-(\ref{eq:mot-const eq}) should be $u\equiv g\equiv 0$. Also, the beam is considered to be fixed at both ends, hence the boundary conditions take the form $$ u(0,t) = u(1,t) = 0, \qquad \ensuremath{\partial}_x u (0,t)=\ensuremath{\partial}_x u (1,t) = 0. $$ By a change of variables $t\mapsto \tau$ via $t(\tau) = \sqrt{R(x)}\tau$, the problem (\ref{eq:mot-trans motion})-(\ref{eq:mot-const eq}) is transformed into the standard form given in (\ref{eq:PDE})-(\ref{eq:FDE}). The function $c$ in (\ref{eq:PDE}) equals $A$ and is therefore of Heaviside type, and the function $b$ is then given by $b(x,t)= P(R(x)t)$; its regularity properties depend on the assumptions on $P$ and $R$. As we shall see in Section \ref{sec:EBmodel}, standard functional analytic techniques reach as far as the following: boundedness of $b$ together with sufficient (spatial Sobolev) regularity of the initial values $f_1, f_2$ ensures existence of a unique solution $u\in L^2(0,T; H^2_0((0,1)))$ to (\ref{eq:IntegroPDE}) with (IC) and (BC). However, the prominent case $b = p_0 + p_1 \delta (t-t_1)$ is clearly not covered by such a result, so in order to allow for these stronger singularities one needs to go beyond distributional solutions. \subsection{Basic spaces of generalized functions} \label{ssec:Colombeau} We shall set up and solve Equation (\ref{eq:IntegroPDE}), subject to the initial and boundary conditions (IC) and (BC), in an appropriate space of Colombeau generalized functions on the domain $X_T := (0,1) \times (0,T)$ (with $T > 0$), as introduced in \cite{BO:92} and applied later on, e.g., also in \cite{GH:04,HoermannOparnica09}. As a few standard references for the general background concerning Colombeau algebras on arbitrary open subsets of $\mathbb R^d$ or on manifolds we mention \cite{CCR, c1, book, MOBook}. We review the basic notions and facts about the kind of generalized functions we will employ below. We start with regularizing families $(u_{\varepsilon})_{\varepsilon\in (0,1]}$ of smooth functions $u_{\varepsilon}\in H^{\infty}(X_T)$ (the space of smooth functions on $X_T$ all of whose derivatives belong to $L^2$). We will often write $(u_{\varepsilon})_{\varepsilon}$ to mean $(u_{\varepsilon})_{\varepsilon\in (0,1]}$. We consider the following subalgebras: {\it Moderate families}, denoted by $\ensuremath{{\cal E}}_{M,H^{\infty}(X_T)}$, are defined by the property $$ \forall\,\alpha\in\mathbb N_0^n\ \exists\,p\geq 0: \|\ensuremath{\partial}^{\alpha}u_{\varepsilon}\|_{L^2(X_T)}= O(\varepsilon^{-p}) \quad \text{ as } \varepsilon\to 0. $$
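As a quick illustration of moderateness (our example, formulated on $\mathbb R$ for simplicity rather than on $X_T$), consider the model regularization $\rho_\varepsilon(t) = \varepsilon^{-1}\rho(t/\varepsilon)$ of the delta distribution by a mollifier $\rho\in\ensuremath{{\cal D}}(\mathbb R)$; a direct substitution gives
$$
\|\rho_\varepsilon^{(k)}\|_{L^2(\mathbb R)} = \varepsilon^{-k-\frac{1}{2}}\,\|\rho^{(k)}\|_{L^2(\mathbb R)} = O(\varepsilon^{-k-1}), \quad \varepsilon\to 0,
$$
for every $k\in\mathbb N_0$, so $(\rho_\varepsilon)_\varepsilon$ is a moderate family; this scaling mechanism underlies the embedding of distributions by convolution regularization recalled below.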
{\it Null families}, denoted by $\ensuremath{{\cal N}}_{H^{\infty}(X_T)}$, are the families in $\ensuremath{{\cal E}}_{M,H^{\infty}(X_T)}$ satisfying $$ \forall\,q\geq 0: \|u_{\varepsilon}\|_{L^2(X_T)} = O(\varepsilon^q) \quad \text{ as } \varepsilon\to 0. $$ Thus moderateness requires $L^2$ estimates, together with all derivatives, with at most polynomial divergence as $\varepsilon\to 0$, while null families vanish very rapidly as $\varepsilon \to 0$. We remark that for null families all derivatives in fact satisfy estimates of the same kind (cf.\ \cite[Proposition 3.4(ii)]{garetto_duals}). Thus null families form a differential ideal in the collection of moderate families, and we may define the {\it Colombeau algebra} as the factor algebra $$ \ensuremath{{\cal G}}_{H^{\infty}(X_T)} = \ensuremath{{\cal E}}_{M, H^{\infty}(X_T)}/\ensuremath{{\cal N}}_{H^{\infty}(X_T)}. $$ A typical notation for the equivalence class in $\ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ with representative $(u_\varepsilon)_\varepsilon$ is $[(u_\varepsilon)_\varepsilon]$. Finally, the algebra $\ensuremath{{\cal G}}_{H^{\infty}((0,1))}$ of generalized functions on the interval $(0,1)$ is defined similarly, and every element can be considered to be a member of $\ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ as well. We briefly recall a few technical remarks from \cite[Subsection 1.2]{HoermannOparnica09}: If $(u_{\varepsilon})_{\varepsilon}$ belongs to $\ensuremath{{\cal E}}_{M,H^{\infty}(X_T)}$, we have smoothness up to the boundary for every $u_\varepsilon$, i.e.\ $u_{\varepsilon}\in C^{\infty}([0,1] \times [0,T])$ (which follows from Sobolev space properties on the Lipschitz domain $X_T$; cf.\ \cite{AF:03}), and therefore the restriction $u |_{t=0}$ of a generalized function $u \in \ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ to $t=0$ is well-defined by $u_{\varepsilon}(\cdot,0)\in \ensuremath{{\cal E}}_{M,H^{\infty}((0,1))}$.
If $v \in \ensuremath{{\cal G}}_{H^{\infty}((0,1))}$ and in addition we have for some representative $(v_\varepsilon)_\varepsilon$ of $v$ that $v_\varepsilon \in H_0^{2}((0,1))$, then $v_\varepsilon(0) = v_\varepsilon(1) = 0$ and $\ensuremath{\partial}_x v_\varepsilon (0) = \ensuremath{\partial}_x v_\varepsilon(1) = 0$. In particular, $$ v(0) = v(1) = 0 \quad\text{and}\quad \ensuremath{\partial}_x v(0) = \ensuremath{\partial}_x v(1) = 0 $$ holds in the sense of generalized numbers. Note that $L^2$-estimates for parametrized families $u_\varepsilon \in H^\infty(X_T)$ always yield similar $L^\infty$-estimates concerning the $\varepsilon$-asymptotics (since $H^\infty(X_T) \subset C^\infty(\overline{X_T}) \subset W^{\infty,\infty}(X_T)$). The space $H^{-\infty}(\mathbb R^d)$, i.e.\ the distributions of finite order, is embedded (as a linear space) into $\ensuremath{{\cal G}}_{H^{\infty}(\mathbb R^d)}$ by convolution regularization (cf.\ \cite{BO:92}). This embedding renders $H^\infty(\mathbb R^d)$ a subalgebra of $\ensuremath{{\cal G}}_{H^{\infty}(\mathbb R^d)}$. Certain generalized functions possess distributional aspects: we call $u =[(u_{\varepsilon})_{\varepsilon}]\in \ensuremath{{\cal G}}_{H^{\infty}}$ {\it associated with the distribution} $w\in\ensuremath{{\cal D}}'$, with notation $u \approx w$, if for some (hence any) representative $(u_{\varepsilon})_{\varepsilon}$ of $u$ we have $u_{\varepsilon}\to w$ in $\ensuremath{{\cal D}}'$ as $\varepsilon\to 0$. \section{Preparations: An abstract evolution problem in variational form and the convolution-type operator $L$} \label{sec:abstract} In this section we study the abstract background of equation (\ref{eq:IntegroPDE}), subject to the initial and boundary conditions (IC) and (BC), in terms of bilinear forms on arbitrary Hilbert spaces. First we recall standard results and then extend them to a wider class of problems. We show existence of a unique solution, derive energy estimates, and analyze the particular form of the operator $L$ appearing in (\ref{eq:IntegroPDE}). Let $V$ and $H$ be two separable Hilbert spaces, where $V$ is densely embedded into $H$. We denote the norms in $V$ and $H$ by $\|\cdot\|_V$ and $\|\cdot\|_H$, respectively. If $V'$ denotes the dual of $V$, then $V \subset H \subset V'$ forms a Gelfand triple. In the sequel we shall also make use of the Hilbert spaces $E_V:=L^2(0,T;V)$ with the norm $\|u\|_{E_V}:=(\int_0^T \|u(t)\|_V^2\,dt)^{1/2}$, and $E_H:=L^2(0,T;H)$ with the norm $\|u\|_{E_H}:=(\int_0^T \|u(t)\|_H^2\,dt)^{1/2}$.
Since $\|v\|_H\leq \ensuremath{{\cal C}}\,\|v\|_V$ for $v\in V$ (without loss of generality we may assume that $\ensuremath{{\cal C}}=1$), it follows that $\|u\|_{E_H}\leq \|u\|_{E_V}$, $u\in E_V$, and $E_V\subset E_H$. The bilinear forms we shall deal with will be of the following type: \begin{assumption} \label{Ass1} Let $a(t,\cdot,\cdot)$, $a_0(t,\cdot,\cdot)$ and $a_1(t,\cdot,\cdot)$, $t\in [0,T]$, be (parametrized) families of continuous bilinear forms on $V$ with $$ a(t,u,v)=a_0(t,u,v)+ a_1(t,u,v) \qquad \forall\, u, v \in V, $$ such that the 'principal part' $a_0$ and the remainder $a_1$ satisfy the following conditions: \begin{itemize} \item[(i)] $t \mapsto a_0(t,u,v)$ is continuously differentiable $[0,T] \to \mathbb R$, for all $u,v \in V$; \item[(ii)] $a_0$ is symmetric, i.e., $a_0(t,u,v)= a_0(t,v,u)$, for all $u,v\in V$; \item[(iii)] there exist real constants $\lambda,\mu >0$ such that \begin{equation} \label{eq:coercivity} a_0(t,u,u) \geq \mu \|u\|_V^2 - \lambda \|u\|_H^2, \qquad \forall\, u\in V,\, \forall\, t\in [0,T]; \end{equation} \item[(iv)] $t \mapsto a_1(t,u,v)$ is continuous $[0,T] \to \mathbb R$, for all $u,v\in V$; \item[(v)] there exists $C_1 \geq 0$ such that for all $t\in [0,T]$ and $u,v\in V$, $|a_1(t,u,v)|\leq C_1 \|u\|_V\, \|v\|_H$. \end{itemize} \end{assumption} It follows from condition (i) that there exist nonnegative constants $C_0$ and $C_0'$ such that for all $t\in [0,T]$ and $u,v\in V$, \begin{equation} \label{i_cons} |a_0(t,u,v)| \leq C_0 \|u\|_V\,\|v\|_V \quad \mbox{ and } \quad |a'_0(t,u,v)| \leq C_0' \|u\|_V \,\|v\|_V, \end{equation} where $a'_0(t,u,v):=\frac{d}{dt} a_0(t,u,v)$. It is shown in \cite[Ch.\ XVIII, p.\ 558, Th.\ 1]{DautrayLions-vol5} (see also \cite[Ch.\ III, Sec.\ 8]{LionsMagenes}) that the above conditions guarantee unique solvability of the abstract variational problem in the following sense: \begin{theorem} \label{th:avp} Let $a(t,\cdot,\cdot)$, $t\in[0,T]$, satisfy Assumption \ref{Ass1}. Let $u_0\in V$, $u_1\in H$ and $f\in E_H$. Then there exists a unique $u \in E_V$ satisfying the regularity conditions \begin{equation} \label{eq:avp-reg} u'= \frac{du}{dt}\in E_V \quad \mbox{ and } \quad u''=\frac{d^2u}{dt^2}\in L^2(0,T;V') \end{equation} (here the time derivatives are understood in the distributional sense), and solving the abstract initial value problem \begin{align} &\dis{u''(t)}{v} + a(t,u(t),v)=\dis{f(t)}{v}, \qquad \forall\, v\in V, \, \mbox{ for a.e. } t \in (0,T), \label{eq:avp}\\ & u(0)=u_0,\qquad u'(0)=u_1. \label{eq:avp-ic} \end{align} (Note that (\ref{eq:avp-reg}) implies that $u \in C([0,T],V)$ and $u' \in C([0,T],V')$. Hence it makes sense to evaluate $u(0)\in V$ and $u'(0) \in V'$, and (\ref{eq:avp-ic}) claims that these equal $u_0$ and $u_1$, respectively.) \end{theorem} \begin{remark} \label{rem:distr-intrpr} The precise meaning of (\ref{eq:avp}) is the following: $\forall\, \varphi\in\ensuremath{{\cal D}}((0,T))$, $$ \dis{\dis{u''(t)}{v}}{\varphi}_{(\ensuremath{{\cal D}}',\ensuremath{{\cal D}})} + \dis{a(t,u(t),v)}{\varphi}_{(\ensuremath{{\cal D}}',\ensuremath{{\cal D}})} = \dis{\dis{f(t)}{v}}{\varphi}_{(\ensuremath{{\cal D}}',\ensuremath{{\cal D}})}, $$ or equivalently, $$ \int_0^T \dis{u(t)}{v}\varphi''(t)\,dt + \int_0^T a(t,u(t),v)\varphi(t)\,dt = \int_0^T \dis{f(t)}{v}\varphi(t)\,dt. $$ \end{remark}
The proof of this theorem proceeds by showing that $u$ satisfies a priori (energy) estimates, which immediately imply uniqueness of the solution, and then by using the Galerkin approximation method to prove existence of a solution. An explicit form of the energy estimate for the abstract variational problem (\ref{eq:avp-reg})-(\ref{eq:avp-ic}), with precise dependence on all constants, is derived in \cite[Prop.\ 1.3]{HoermannOparnica09} in the form \begin{equation} \label{eq:ee-ThmP1} \|u(t)\|^2_V + \|u'(t)\|^2_H \leq \left( D_T\|u_0\|_V^2 + \|u_1\|_H^2 + \int_0^t \|f(\tau)\|_H^2\,d\tau \right) \cdot e^{t\cdot F_T}, \end{equation} where $D_T:=\frac{C_0 + \lambda (1+ T)}{\min\{1,\mu\}}$ and $F_T := \max \{\frac{C_0' +C_1}{\min\{1,\mu\}}, \frac{C_1 +T+ 2}{\min\{1,\mu\}}\}$.\\ \subsection{Existence of a solution to the abstract variational problem} \label{ssec:existence}
We shall now prove a similar result for a slightly modified abstract variational problem, which is to encompass our problem (\ref{eq:IntegroPDE}). Here, in addition to the bilinear forms, we consider ``causal'' operators $L:L^2(0,T_1;H)\to L^2(0,T_1;H)$, for all $T_1<T$, which satisfy the following estimate: there exists $C_L>0$ such that \begin{equation} \label{eq:L-estimate} \|Lu\|_{L^2(0,T_1;H)} \leq C_L \|u\|_{L^2(0,T_1;H)}, \end{equation} where $C_L$ is independent of $T_1$. \begin{lemma} \label{lemma:m-a} Let $a(t,\cdot,\cdot)$, $t\in[0,T]$, satisfy Assumption \ref{Ass1}. Let $f_1\in V$, $f_2\in H$ and $h\in E_H$.
Let $L:E_H\to E_H$ satisfy (\ref{eq:L-estimate}). Then there exists a $u\in E_V$ satisfying the regularity conditions $$ u'=\frac{du}{dt} \in E_V \quad \mbox{ and } \quad u''=\frac{d^2 u}{dt^2} \in L^2(0,T;V') $$ and solving the abstract initial value problem \begin{align} &\dis{u''(t)}{v} + a(t,u(t),v) + \dis{Lu(t)}{v}= \dis{h(t)}{v}, \qquad \forall\, v\in V,\, \mbox{ for a.e. } t \in (0,T), \label{eq:avp-L}\\ & u(0)=f_1, \qquad u'(0) = f_2. \label{eq:avp-L-ic} \end{align} Moreover, we have $u\in\ensuremath{{\cal C}}([0,T];V)$ and $u'\in\ensuremath{{\cal C}}([0,T];H)$. \end{lemma} Here we give a proof based on an iterative procedure, employing Theorem \ref{th:avp} and the energy estimate (\ref{eq:ee-ThmP1}) in each step. Notice that the precise meaning of (\ref{eq:avp-L}) (in the distributional sense) is explained in Remark \ref{rem:distr-intrpr}. \begin{proof} Let $u_0\in E_H$ be arbitrarily chosen and consider the initial value problem for $u$, in the sense of Remark \ref{rem:distr-intrpr}, \begin{align} & \dis{u''(t)}{v}+ a(t,u(t),v)+ \dis{Lu_0(t)}{v}= \dis{h(t)}{v}, \qquad \forall\, v\in V,\, \mbox{ for a.e. } t \in (0,T), \label{eq:sol u1}\\ & u(0) = f_1,\quad u'(0) = f_2. \nonumber \end{align} By Theorem \ref{th:avp} there exists a unique $u_1\in E_V$ satisfying $u_1'\in E_V$, $u_1''\in L^2(0,T;V')$, and solving (\ref{eq:sol u1}). Consider now (\ref{eq:sol u1}) with $Lu_1$ instead of $Lu_0$. As above, by Theorem \ref{th:avp}, one obtains a unique solution $u_2\in E_V$ with $u_2'\in E_V$ and $u_2''\in L^2(0,T;V')$. Repeating this procedure we obtain a sequence of functions $\{u_k\}_{k\in\mathbb N}\subset E_V$, satisfying $u_k'\in E_V$, $u_k''\in L^2(0,T;V')$, and solving the following problems: for each $k\in\mathbb N$, \begin{align*} & \dis{u_k''(t)}{v} + a(t,u_k(t),v) + \dis{Lu_{k-1}(t)}{v}= \dis{h(t)}{v}, \qquad \forall\, v\in V,\, \mbox{ for a.e. } t \in (0,T), \\ & u_k(0) = f_1,\quad u_k'(0) = f_2. \end{align*} Also, for all $k\in\mathbb N$, $u_k$ satisfies an energy estimate of type (\ref{eq:ee-ThmP1}): $$ \|u_k(t)\|_V^2 + \|u_k'(t)\|_H^2 \leq \left( D_T \,\|f_1\|_V^2 + \|f_2\|_H^2 + \int_0^t\|(h-Lu_{k-1})(\tau)\|_H^2 \,d\tau \right) \cdot e^{t\cdot F_T}, $$ where the constants $D_T$ and $F_T$ are independent of $k$. We claim that $\{u_k\}_{k\in\mathbb N}$ converges in $E_V$. To see this, we first note that $u_l-u_k$ solves \begin{align*} & \dis{(u_l-u_k)''(t)}{v} + a(t,(u_l-u_k)(t),v) + \dis{L(u_{l-1}-u_{k-1})(t)}{v}= 0, \quad \forall\,v\in V,\, \mbox{ for a.e. } t \in (0,T), \\ & (u_l-u_k)(0) = 0, \quad (u_l-u_k)'(0) = 0, \end{align*} and $u_l-u_k\in E_V$, with $u_l'-u_k'\in E_V$ and $u_l''-u_k''\in L^2(0,T;V')$. Moreover, the corresponding energy estimate is of the form \begin{equation} \label{EE1} \|(u_l-u_k)(t)\|_V^2 + \|(u_l-u_k)'(t)\|_H^2 \leq e^{t\cdot F_T} \cdot \int_0^t \|L(u_{l-1}-u_{k-1})(\tau)\|_H^2\,d\tau. \end{equation} Thus, $$ \|(u_l-u_k)(t)\|_V^2 \leq e^{T\cdot F_T} \cdot \int_0^T \|L(u_{l-1}-u_{k-1})(\tau)\|_H^2\,d\tau = e^{T\cdot F_T} \cdot \|L(u_{l-1}-u_{k-1})\|_{E_H}^2. $$ Integrating from $0$ to $T$ and using assumption (\ref{eq:L-estimate}) on $L$, one obtains \begin{equation} \label{uk-ul_estimates} \|u_l-u_k\|_{E_V} \leq \gamma_T \|u_{l-1}-u_{k-1}\|_{E_H}, \end{equation} where $\gamma_T:=C_L \sqrt{T}\, e^{\frac{T\cdot F_T}{2}}$.
Taking now $l=k+1$ in (\ref{uk-ul_estimates}) successively yields $$ \|u_{k+1}-u_k\|_{E_V} \leq \gamma_T^k \|u_{1}-u_{0}\|_{E_H} \leq \gamma_T^k \|u_{1}-u_{0}\|_{E_V}, $$ and hence $$ \|u_l-u_k\|_{E_V} \leq \|u_l-u_{l-1}\|_{E_V}+\ldots +\|u_{k+1}-u_{k}\|_{E_V} \leq \sum_{i=k}^{l-1} \gamma_T^i \|u_{1}-u_{0}\|_{E_V}. $$ Since $F_T$ stays bounded as $T\to 0^+$, we have $\gamma_T\to 0$ as $T\to 0^+$; hence we may choose $T_1<T$ such that $\gamma_{T_1}< 1$, so that $\sum_{i=0}^\infty \gamma_{T_1}^i$ converges. Note that $t\mapsto\gamma_t$ is increasing. By abuse of notation we denote $L^2(0,T_1;V)$ again by $E_V$. This implies that $\{u_k\}_{k\in\mathbb N}$ is a Cauchy sequence and hence convergent in $E_V$, say $u:= \lim_{k\to\infty} u_k$. Similarly, one can show convergence of $u_k'$ in $E_V$, i.e., the existence of $v:= \lim_{k\to\infty} u_k' \in E_V$. In the distributional setting $\lim_{k\to\infty} u_k' = u'$, and therefore $u'=v\in E_V$ (cf.\ \cite[Ch.\ XVIII, p.\ 473, Prop.\ 6]{DautrayLions-vol5}). We also have to show that $u$ solves equation (\ref{eq:avp-L}). Let $\varphi\in \ensuremath{{\cal D}}((0,T))$. Then \begin{align*} \dis{\dis{h(t)}{v}}{\varphi} & = \dis{\dis{u_k''(t)}{v}}{\varphi} + \dis{a(t,u_k(t),v)}{\varphi} + \dis{\dis{Lu_{k-1}(t)}{v}}{\varphi} \\ & = \dis{\dis{u_k}{v}}{\varphi''} + \dis{a(t,u_k(t),v)}{\varphi} + \dis{\dis{Lu_{k-1}(t)}{v}}{\varphi} \\ & \to \dis{\dis{u}{v}}{\varphi''} + \dis{a(t,u(t),v)}{\varphi} + \dis{\dis{Lu(t)}{v}}{\varphi} \\ & = \dis{\dis{u''(t)}{v}}{\varphi} + \dis{a(t,u(t),v)}{\varphi} + \dis{\dis{Lu(t)}{v}}{\varphi}. \end{align*} Here we used that $\varphi''\in \ensuremath{{\cal D}}((0,T))$. Therefore $u$ solves (\ref{eq:avp-L}) on the time interval $[0,T_1]$. The initial conditions are satisfied by construction of $u$. It remains to extend this existence result to the whole interval $[0,T]$.
Since $T_1$ is independent of the initial data, if $T>T_1$ one needs at most $\frac{T}{T_1}$ steps to reach convergence in $E_V$ on all of $[0,T]$. In fact, one has to show regularity at the end point $T_1$ of the interval $[0,T_1]$ on which the solution exists, i.e., $$ u(T_1)\in V \quad \mbox{ and } \quad u'(T_1)\in H. $$ To see this, it suffices to show that $u_k\to u$ in $Y_V:=\ensuremath{{\cal C}}([0,T_1];V)$ and $u_k'\to u'$ in $Y_H:=\ensuremath{{\cal C}}([0,T_1];H)$. From (\ref{EE1}) and assumption (\ref{eq:L-estimate}) on $L$ we obtain $$ \|(u_l-u_k)(t)\|_V^2 \leq e^{T_1\cdot F_{T_1}} C_L^2 \int_0^{T_1} \|(u_{l-1}-u_{k-1})(\tau)\|_H^2 \,d\tau \leq e^{T_1\cdot F_{T_1}} C_L^2 \int_0^{T_1} \|(u_{l-1}-u_{k-1})(\tau)\|_V^2 \,d\tau. $$ Taking first the square root and then the supremum over all $t\in[0,T_1]$ yields $$ \|u_l-u_k\|_{Y_V} \leq \gamma_{T_1} \|u_{l-1}-u_{k-1}\|_{Y_V}. $$ Since $\gamma_{T_1}<1$, the same telescoping argument as above shows that $\{u_k\}_{k\in\mathbb N}$ is a Cauchy sequence in $Y_V$. Similarly, $$ \|(u_l-u_k)'(t)\|_H^2 \leq e^{T_1\cdot F_{T_1}} C_L^2 \int_0^{T_1} \|(u_{l-1}-u_{k-1})(\tau)\|_V^2 \,d\tau, $$ which upon taking the supremum gives $$ \|(u_l-u_k)'\|_{Y_H} \leq \gamma_{T_1} \|u_{l-1}-u_{k-1}\|_{Y_V}, $$ thus $u'_k\to u'$ in $Y_H$ (due to the already established convergence of $u_k$ in $Y_V$), and $u'(T_1)\in H$. This proves the claim. \end{proof}
\subsection{Energy estimates} \label{ssec:ee} In Section \ref{sec:EBmodel} we shall need an a priori (energy) estimate for problem (\ref{eq:IntegroPDE}). In fact, for the verification of moderateness in the Colombeau setting it will be crucial to know all constants in the energy estimate precisely. Therefore, we now derive it. \begin{proposition} \label{prop:EnergyEstimates} Under the assumptions of Lemma \ref{lemma:m-a}, let $u$ be a solution to the abstract variational problem (\ref{eq:avp-L})-(\ref{eq:avp-L-ic}). Then, for each $t\in [0,T]$, \begin{equation}\label{eq:EE} \|u(t)\|^2_V + \|u'(t)\|^2_H \leq \left( D_T\|f_1\|_V^2 + \frac{1}{\nu} \left( \|f_2\|_H^2 + \int_0^{t} \|h(\tau)\|^2_H \,d\tau \right) \right) \cdot e^{t\cdot F_T}, \end{equation} where $\nu:=\min\{1,\mu\}$, $D_T:=\frac{C_0 + \lambda (1+ T)}{\nu}$ and $F_T := \max \{\frac{C_0' +C_1+ C_L}{\nu}, \frac{C_1 +2+ \lambda (1+ T)}{\nu}\}$. \end{proposition} \begin{proof} Setting $v:= u'(t)$ in (\ref{eq:avp-L}) we obtain (as an equality of integrable functions with respect to $t$) $$ \dis{u''(t)}{u'(t)} + a(t,u(t),u'(t)) + \dis{Lu(t)}{u'(t)} = \dis{h(t)}{u'(t)}. $$ Since $a(t,u,v)=a_0(t,u,v)+a_1(t,u,v)$ and $\dis{u''(t)}{u'(t)}= \frac{1}{2}\frac{d}{dt}\dis{u'(t)}{u'(t)} =\frac{1}{2}\frac{d}{dt}\|u'(t)\|^2_H$, we have $$ \frac{d}{dt}\|u'(t)\|^2_H = - 2 a_0(t,u(t),u'(t)) - 2a_1(t,u(t), u'(t)) - 2 \dis{Lu(t)}{u'(t)} + 2 \dis{h(t)}{u'(t)}. $$ Integration from $0$ to $t_1$, for arbitrary $0< t_1 \leq T$, gives \begin{align*} \|u'(t_1)\|^2_H - \|f_2\|^2_H &= - 2 \int_0^{t_1} a_0(t,u(t),u'(t))\,dt - 2 \int_0^{t_1} a_1(t,u(t), u'(t))\,dt \\ &\quad - 2 \int_0^{t_1}\dis{Lu(t)}{u'(t)}\,dt + 2 \int_0^{t_1} \dis{h(t)}{u'(t)}\,dt. \end{align*} Note that $\frac{d}{dt}a_0(t,u(t),u(t))= a_0'(t,u(t),u(t)) +a_0(t,u'(t),u(t))+a_0(t,u(t),u'(t))$ and hence, by Assumption \ref{Ass1} (ii), $2a_0(t,u'(t),u(t))=\frac{d}{dt}a_0(t,u(t),u(t))-a_0'(t,u(t),u(t))$. This yields \begin{align} & LHS := \|u'(t_1)\|^2_H + a_0(t_1, u(t_1), u(t_1)) = \|f_2\|^2_H + a_0(0,u(0),u(0)) - \int_0^{t_1} a_0'(t,u(t),u(t))\,dt \nonumber \\ & \qquad - 2\int_0^{t_1} a_1(t,u(t), u'(t)) \,dt - 2\int_0^{t_1} \dis{Lu(t)}{u'(t)}\,dt + 2\int_0^{t_1} \dis{h(t)}{u'(t)}\,dt = : RHS. \label{eq:srednja} \end{align} Further, by (\ref{i_cons}), Assumption \ref{Ass1} (v), the Cauchy-Schwarz inequality, the inequality $2ab\leq a^2+b^2$, and the assumption (\ref{eq:L-estimate}) on $L$, we have \begin{align*} |RHS| & \leq \|f_2\|^2_H + C_0\|u(0)\|^2_V + C_0' \int_0^{t_1} \|u(t)\|^2_V \,dt \\ & \quad + 2 C_1 \int_0^{t_1} \|u(t)\|_V \| u'(t)\|_H \,dt + 2 \int_0^{t_1} \|Lu(t)\|_H \|u'(t)\|_H \,dt + 2 \int_0^{t_1} \|h(t)\|_H \|u'(t)\|_H \,dt \\ & \leq \|f_2\|^2_H + C_0\|f_1\|^2_V + (C_0' +C_1) \int_0^{t_1} \| u(t)\|^2_V \,dt \\ & \quad + (C_1 +2) \int_0^{t_1} \| u'(t)\|^2_H \,dt + \|Lu\|^2_{L^2(0,t_1;H)} + \int_0^{t_1} \|h(t)\|^2_H \,dt \\ & \leq \|f_2\|^2_H + C_0\|f_1\|^2_V + (C_0' +C_1+ C_L) \int_0^{t_1} \| u(t)\|^2_V \,dt \\ & \quad + (C_1 +2) \int_0^{t_1} \|u'(t)\|^2_H \,dt + \int_0^{t_1} \|h(t)\|^2_H \,dt. \end{align*} Further, it follows from (\ref{eq:coercivity}) that $$ LHS = \|u'(t_1)\|^2_H + a_0(t_1, u(t_1), u(t_1)) \geq \|u'(t_1)\|^2_H + \mu \|u(t_1)\|^2_V - \lambda \|u(t_1)\|^2_H, $$ and therefore (\ref{eq:srednja}) yields \begin{align*} \|u'(t_1)\|^2_H + \mu \|u(t_1)\|^2_V & \leq \lambda \|u(t_1)\|^2_H + C_0 \|f_1\|^2_V +\|f_2\|^2_H + \int_0^{t_1} \|h(t)\|^2_H \,dt \\ & \quad + (C_0' +C_1+ C_L) \int_0^{t_1} \| u(t)\|^2_V \,dt + (C_1 +2) \int_0^{t_1} \|u'(t)\|^2_H \,dt. \end{align*}
As shown in \cite{HoermannOparnica09}, we have $\|u(t)\|^2_H \leq (1+t)(\|f_1\|^2_V + \int_0^{t} \|u'(s)\|^2_H \,ds)$, hence \begin{align*} \|u(t_1)\|^2_V + \|u'(t_1)\|^2_H \leq D_T \|f_1\|^2_V + \frac{1}{\nu} \left( \|f_2\|^2_H + \int_0^{t_1} \|h(t)\|^2_H \,dt \right) + F_T \int_0^{t_1} (\| u(t)\|^2_V + \|u'(t)\|^2_H )\,dt, \end{align*} where $\nu:=\min\{1,\mu\}$, $D_T:=\frac{C_0 + \lambda (1+ T)}{\nu}$ and $F_T := \max \{\frac{C_0' +C_1+ C_L}{\nu}, \frac{C_1 +2+ \lambda (1+ T)}{\nu} \}$. The claim now follows from Gronwall's lemma (in the form: $\varphi(t)\leq A + F\int_0^t \varphi(\tau)\,d\tau$ for all $t$ implies $\varphi(t)\leq A\, e^{tF}$). \end{proof} As a consequence of Proposition \ref{prop:EnergyEstimates}, one also obtains uniqueness of the solution in Lemma \ref{lemma:m-a}. \begin{theorem} Under the assumptions of Lemma \ref{lemma:m-a} there exists a unique $u\in E_V$ satisfying the regularity conditions $u'\in E_V$ and $u''\in L^2(0,T;V')$, and solving the abstract initial value problem (\ref{eq:avp-L})-(\ref{eq:avp-L-ic}). Moreover, $u\in\ensuremath{{\cal C}}([0,T];V)$ and $u'\in\ensuremath{{\cal C}}([0,T];H)$. \end{theorem} \begin{proof} Since existence of a solution is proved in Lemma \ref{lemma:m-a}, it remains to prove the uniqueness part of the theorem. Thus, let $u$ and $w$ be solutions to the abstract initial value problem (\ref{eq:avp-L})-(\ref{eq:avp-L-ic}), satisfying the regularity conditions $u',w'\in E_V$ and $u'',w''\in L^2(0,T;V')$. Then $u-w$ is a solution to the homogeneous abstract problem with vanishing initial data \begin{align*} &\dis{(u-w)''(t)}{v} + a(t,(u-w)(t),v) + \dis{L(u-w)(t)}{v}= 0, \qquad \forall\, v\in V,\, \mbox{ for a.e. } t \in (0,T), \\ & (u-w)(0)=0, \qquad (u-w)'(0) = 0. \end{align*} Moreover, according to Proposition \ref{prop:EnergyEstimates}, $u-w$ satisfies the energy estimate (\ref{eq:EE}) with $f_1=f_2=h\equiv 0$, hence $u\equiv w$. This proves uniqueness of the solution. \end{proof} \subsection{Basic properties of the operator $L$} \label{ssec:L} In this subsection we analyze the particular form of the operator $L$ relevant to the problem described in the Introduction. We therefore consider an operator of convolution type and seek conditions which guarantee the estimate (\ref{eq:L-estimate}). \begin{lemma} \label{lemma:L_1} Let $l\in L^2_{loc}(\mathbb R)$ with $\mathop{\rm supp}\nolimits l \subset [0,\infty)$. Then for all $T_1\in[0,T]$, the operator $L$ defined by $Lu(x,t) := \int_0^t l(s) u(x,t-s)\,ds$ maps $L^2(0,T_1;H)$ into itself, and (\ref{eq:L-estimate}) holds with $C_L = \|l\|_{L^2(0,T)}\cdot \sqrt{T}$. \end{lemma} \begin{remark} We may think of $u$ as being extended by $0$ outside $[0,T]$ to a function in $L^2(\mathbb R;H)$, and then identify $Lu$ with $l\ast_t u$. \end{remark} \begin{proof} Integration of $\|Lu(t)\|_H \leq \int_0^t |l(t-s)|\,\|u(s)\|_H \,ds$ from $0$ to $T_1$, $0<T_1\leq T$, yields \begin{multline*} \left( \int_0^{T_1}\|Lu(t)\|_H^2 \,dt \right)^{1/2} \leq \left(\int_0^{T_1} \Bigl(\int_0^t |l(t-s)|\,\|u(s)\|_H \,ds\Bigr)^2 \,dt\right)^{1/2} \\ \leq \left(\int_0^{T_1} \Bigl(\int_0^{T_1} |l(t-s)|\,\|u(s)\|_H \,ds\Bigr)^2 \,dt\right)^{1/2} \leq \int_0^{T_1} \left(\int_0^{T_1} |l(t-s)|^2 \|u(s)\|^2_H \,dt\right)^{1/2} \,ds \\ = \int_0^{T_1} \left( \int_0^{T_1} |l(t-s)|^2 \,dt\right)^{1/2} \|u(s)\|_H \,ds \leq \|l\|_{L^2(0,T_1)}\cdot \|u\|_{L^1(0,T_1; H)} \leq \|l\|_{L^2(0,T)}\cdot \sqrt{T} \cdot \|u\|_{L^2(0,T_1; H)}, \end{multline*} where we have used the support property of $l$, Minkowski's inequality for integrals (cf.\ \cite[p.\ 194]{Folland}), and the Cauchy-Schwarz inequality (in the last step, $\|u\|_{L^1(0,T_1;H)}\leq \sqrt{T_1}\,\|u\|_{L^2(0,T_1;H)}$). \end{proof}
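As a simple worked instance of Lemma \ref{lemma:L_1} (our illustration; this particular kernel is not used later in the paper), take $l(t) = H(t)\,e^{-t}$, with $H$ the Heaviside function. Then $l\in L^2_{loc}(\mathbb R)$ with $\mathop{\rm supp}\nolimits l \subset [0,\infty)$, the operator reads $Lu(x,t)=\int_0^t e^{-s}u(x,t-s)\,ds$, and
$$
\|l\|_{L^2(0,T)} = \Bigl(\int_0^T e^{-2t}\,dt\Bigr)^{1/2} = \Bigl(\frac{1-e^{-2T}}{2}\Bigr)^{1/2},
$$
so that (\ref{eq:L-estimate}) holds with $C_L = \sqrt{T}\,\bigl((1-e^{-2T})/2\bigr)^{1/2}$.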
In the following lemma we discuss a regularization of $L$, which will be used in Section \ref{ssec:Colombeau sol}. \begin{lemma} \label{lemma:reg L} Let $l\in L^1_{loc}(\mathbb R)$ with $\mathop{\rm supp}\nolimits l \subset [0,\infty)$. Let $\rho\in\ensuremath{{\cal D}}(\mathbb R)$ be a mollifier ($\mathop{\rm supp}\nolimits \rho\subset B_1(0)$, $\int \rho =1$). Define $\rho_\varepsilon(t):=\gamma_\varepsilon \rho(\gamma_\varepsilon t)$, with $\gamma_\varepsilon>0$ and $\gamma_\varepsilon\to\infty$ as $\varepsilon\to 0$, $l_\varepsilon := l*\rho_\varepsilon$ and $\tilde{L}_\varepsilon u(t):= (l_\varepsilon *_t u)(t)$, for $u\in E_H$.
Then, for all $p\in[1,\infty)$, $l_\varepsilon\in L^p_{loc}(\mathbb R)$ and $l_\varepsilon\to l$ in $L^1_{loc}(\mathbb R)$. \end{lemma} \begin{proof} Let $K$ be a compact subset of $\mathbb R$. Then \begin{multline*} \|l_\varepsilon\|_{L^p(K)} = \|l*_t\rho_\varepsilon\|_{L^p(K)} = \left(\int_K \Bigl|\int_{-\infty}^\infty l(\tau)\rho_\varepsilon(t-\tau)\,d\tau\Bigr|^p \,dt\right)^{1/p} \\ \leq \left(\int_K \left(\int_{-\infty}^\infty |l(\tau)||\rho_\varepsilon(t-\tau)|\,d\tau\right)^p \,dt\right)^{1/p} \leq \left(\int_K \left( \int_{K+ B_1(0)} |l(\tau)||\rho_\varepsilon(t-\tau)|\,d\tau \right)^p \,dt\right)^{1/p} \\ \leq \int_{K+ B_1(0)} \left(\int_K |l(\tau)|^p|\rho_\varepsilon(t-\tau)|^p \,dt\right)^{1/p} \,d\tau = \int_{K+ B_1(0)} |l(\tau)|\left(\int_K |\rho_\varepsilon(t-\tau)|^p \,dt\right)^{1/p} \,d\tau \\ \leq \int_{K+ B_1(0)} |l(\tau)|\, \|\rho_\varepsilon\|_{L^p(B_1(0))} \,d\tau = \|l\|_{L^1(K+B_1(0))}\, \|\rho_\varepsilon\|_{L^p(B_1(0))} = \|l\|_{L^1(K+B_1(0))} \cdot \gamma_\varepsilon^{1-\frac{1}{p}} \cdot \|\rho\|_{L^p(B_1(0))}, \end{multline*} where the second inequality follows from the support properties of $l$ and $\rho$ ($t-\tau\in B_1(0)$, $t\in K$ implies $\tau\in K+B_1(0)$), while for the third inequality we used Minkowski's inequality for integrals. Further, we show that $l_\varepsilon\to l$ in $L^1_{loc}(\mathbb R)$. Let $K\subset\subset\mathbb R$. We claim that $\int_K|l_\varepsilon-l|\to 0$ as $\varepsilon\to 0$. Indeed, \begin{multline*} \int_K \Bigl|\int_\mathbb R l(t-s)\rho_\varepsilon(s)\,ds - l(t)\cdot\int_\mathbb R\rho_\varepsilon(s)\,ds\Bigr|\,dt = \int_K \Bigl|\int_\mathbb R (l(t-s)-l(t))\rho_\varepsilon(s)\,ds\Bigr|\,dt\\ \stackrel{[\gamma_\varepsilon s=\tau]}{=} \int_K \Bigl|\int_\mathbb R \bigl(l(t-\tfrac{\tau}{\gamma_\varepsilon}) - l(t)\bigr)\rho(\tau)\,d\tau\Bigr|\,dt \leq \int_K \int_\mathbb R \bigl|l(t-\tfrac{\tau}{\gamma_\varepsilon}) - l(t)\bigr|\,|\rho(\tau)|\,d\tau \,dt\\ = \int_\mathbb R |\rho(\tau)| \int_K \bigl|l(t-\tfrac{\tau}{\gamma_\varepsilon})-l(t)\bigr|\,dt \,d\tau. \end{multline*} By \cite[Prop.\ 8.5]{Folland} we have that $\|l(\cdot-\frac{\tau}{\gamma_\varepsilon})-l\|_{L^1(K)}\to 0$ as $\varepsilon\to 0$, and therefore the integrand converges to $0$ pointwise almost everywhere. Since it is also bounded by $2|\rho(\tau)|\,\|l\|_{L^1(K+B_1(0))}\in L^1(\mathbb R)$ (as soon as $\gamma_\varepsilon\geq 1$), Lebesgue's dominated convergence theorem implies the result. \end{proof}
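For later use it is instructive to record the growth rate this proof yields (our remark; the choice $\gamma_\varepsilon = 1/\varepsilon$ is a typical one, not fixed by the lemma): for $p=2$,
$$
\|l_\varepsilon\|_{L^2(K)} \leq \|l\|_{L^1(K+B_1(0))}\,\gamma_\varepsilon^{1/2}\,\|\rho\|_{L^2(B_1(0))} = O(\varepsilon^{-1/2}), \quad \varepsilon\to 0,
$$
i.e.\ the regularized kernels blow up at most like a fixed power of $\varepsilon$, which is the kind of moderateness bound relevant for Section \ref{ssec:Colombeau sol}.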
\section{Weak and generalized solutions of the model equations} \label{sec:EBmodel} We now come back to the problem (\ref{eq:PDE})-(\ref{eq:FDE})-(IC)-(BC) or (\ref{eq:IntegroPDE})-(IC)-(BC), and hence need to provide assumptions which guarantee that it can be interpreted in the form (\ref{eq:avp-L}), in order to apply the results obtained above. For that purpose we need to prescribe the regularity of the functions $c$ and $b$ which appear in $Q$.
In Section \ref{ssec:Colombeau sol} we shall use these results on the level of representatives to prove existence of solutions in the Colombeau generalized setting. Thus, let $H := L^2(0,1)$ with the standard scalar product $\dis{u}{v}=\int_0^1u(x)v(x)\,dx$ and $L^2$-norm denoted by $\|\cdot\|_H$. Let $V$ be the Sobolev space $H^2_0((0,1))$, which is the completion of the space of compactly supported smooth functions $C^\infty_c((0,1))$ with respect to the norm $\|u\|_{2} = (\sum_{k= 0}^2 \|u^{(k)}\|^2)^{1/2}$ (and inner product $(u,v) \mapsto \sum_{k=0}^2\dis{u^{(k)}}{v^{(k)}}$). Then $V'= H^{-2}((0,1))$, which consists of distributional derivatives up to second order of functions in $L^2(0,1)$, and $V\hookrightarrow H \hookrightarrow V'$ forms a Gelfand triple. With this choice of spaces $H$ and $V$ we also have that $E_V=L^2(0,T; H^2_0((0,1)))$ and $E_H=L^2((0,1)\times (0,T))$. Let \begin{equation} \label{HypothesisOn_c_and_b} c\in L^\infty(0,1) \mbox{ and real}, \qquad b\in C([0,T];L^{\infty}(0,1)), \end{equation} and suppose that there exist constants $c_1 > c_0 > 0$ such that \begin{equation} \label{addHypothesisOn_c} 0 < c_0\leq c(x)\leq c_1, \qquad \mbox{ for almost every } x. \end{equation} For $t\in [0,T]$ we define the bilinear forms $a(t,\cdot,\cdot)$, $a_0(t,\cdot,\cdot)$ and $a_1(t,\cdot,\cdot)$ on $V\times V$ by \begin{equation} \label{sesforma} a_0(t,u,v) = \dis{c(x)\, \ensuremath{\partial}_x^2u}{\ensuremath{\partial}_x^2v}, \qquad a_1(t,u,v) = \dis{b(x,t)\, \ensuremath{\partial}_x^2u}{v}, \qquad u,v\in V, \end{equation} and \begin{equation} \label{sesform2} a(t,u,v) = a_0(t,u,v) + a_1(t,u,v). \end{equation} Properties (\ref{HypothesisOn_c_and_b}), (\ref{addHypothesisOn_c}) imply that $a_0$, $a_1$ defined as in (\ref{sesforma}) satisfy the conditions of Assumption \ref{Ass1} (cf.\ \cite[proof of Th.\ 2.2]{HoermannOparnica09}). The specific form of the operator $L$ is designed to achieve equivalence of the system (\ref{eq:PDE})-(\ref{eq:FDE}) with the equation (\ref{eq:IntegroPDE}), which we show in the sequel. Let $\ensuremath{{\cal S}}'_+$ denote the space of Schwartz' distributions supported in $[0,\infty)$. It is known (cf.\ \cite{Oparnica02}) that for given $z\in \ensuremath{{\cal S}}'_+$ there is a unique $y\in\ensuremath{{\cal S}}'_+$ such that $D_t^\alpha z + z = \theta D_t^\alpha y + y$. Moreover, it is given by $y=\tilde{L}z$, where $\tilde{L}$ is a linear convolution operator acting on $\ensuremath{{\cal S}}'_+$ as \begin{equation} \label{operatorLonEs} \tilde{L}z: =\ILT \left(\frac{1 + s^{\alpha}}{1 + \theta s^{\alpha}}\right) \ast_t z, \qquad z\in\ensuremath{{\cal S}}'_+. \end{equation} The following lemma extends the operator $\tilde{L}$ to the space $E_H$. \begin{lemma} Let $\tilde{L}:\ensuremath{{\cal S}}'_+ \to \ensuremath{{\cal S}}'_+$ be defined as in (\ref{operatorLonEs}). Then $\tilde{L}$ induces a continuous operator $L=\mathop{\rm Id}\nolimits+L_\alpha$ on $E_H$, where $L_\alpha$ corresponds to convolution in the time variable with a function $l_\alpha\in L^1_{loc}([0,\infty))$. \end{lemma} \begin{proof} Recall that for the Mittag-Leffler function $e_\alpha(t,\lambda)$, defined by $$ e_\alpha(t,\lambda) = \sum_{k=0}^\infty \frac{(-\lambda t^\alpha)^k}{\Gamma(\alpha k+1)}, $$ we have that $\LT (e_{\alpha}(t,\lambda))(s) =\frac{s^{\alpha-1}}{s^{\alpha} + \lambda}$, $e_{\alpha}\in C^\infty((0,\infty))\cap C([0,\infty))$ and $e'_{\alpha} \in C^\infty((0,\infty))\cap L^1_{loc}([0,\infty))$ (cf.\ \cite{MainardiGorenflo2000}).
Also, $$ \ILT \left(\frac{1+s^{\alpha}}{1+\theta s^{\alpha}}\right)(t)= \ILT\left( 1 + \frac{(1-\theta)s^\alpha}{\theta (s^\alpha + \frac{1}{\theta})} \right)(t) = \delta(t) + \left(\frac{1}{\theta}-1\right) e_\alpha'\left(t,\frac{1}{\theta}\right) =: \delta(t)+l_\alpha(t). $$ Let $ u\in E_H$. Then \begin{equation} \ILT \left(\frac{1+ s^{\alpha}}{1+ \theta s^{\alpha}}\right)(\cdot) \ast_t u (x,\cdot) = u(x,\cdot) + \left(\frac{1}{\theta}-1\right) e_\alpha' \ast u(x,\cdot) \end{equation} is an element in $L^2(0,T)$ for almost all $x$ (use Fubini's theorem, $e'_{\alpha} \in L^1(0,T)$ and $L^1\ast L^p\subset L^p$ (cf.\ \cite{Folland})). Extend this to a measurable function on $(0,1)\times(0,T)$, denoted by $Lu$. By Young's inequality we have \begin{align*} \|(Lu)(x,\cdot)\|_{L^2(0,T)} & \leq \|u(x,\cdot)\|_{L^2(0,T)} +|\frac{1}{\theta}-1|\|e_\alpha'\ast u(x,\cdot)\|_{L^2(0,T)} \\ & \leq \|u(x,\cdot)\|_{L^2(0,T)} + |\frac{1}{\theta}-1| \|e'_\alpha\|_{L^1(0,T)} \|u(x,\cdot)\|_{L^2(0,T)}, \end{align*} hence, \begin{equation} \label{Lbound} \|Lu\|_{E_H} \leq (1 + |\frac{1}{\theta}-1| \|e'_\alpha\|_{L^1(0,T)}) \|u\|_{E_H}. \end{equation} Thus, $Lu\in E_H$ and $L$ is continuous on $E_H$. \end{proof}
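\begin{remark} For orientation we record two special cases (a worked supplement, added here; cf.\ also the remark on $l_\alpha$ in Section \ref{ssec:Colombeau sol}). For $\alpha=1$ the Mittag-Leffler function reduces to $e_1(t,\lambda)=e^{-\lambda t}$, so \[ l_\alpha(t) = \left(\frac{1}{\theta}-1\right) e_1'\left(t,\frac{1}{\theta}\right) = -\frac{1-\theta}{\theta^2}\, e^{-t/\theta} \] is smooth and bounded on $[0,\infty)$. For $0<\alpha<1$ we have $e_\alpha(t,\lambda)=1-\lambda t^\alpha/\Gamma(\alpha+1)+O(t^{2\alpha})$ as $t\to 0^+$, hence \[ l_\alpha(t) \sim -\frac{1-\theta}{\theta^2\,\Gamma(\alpha)}\, t^{\alpha-1} \qquad (t\to 0^+), \] an integrable singularity at $t=0$; in particular $l_\alpha\in L^2_{loc}$ precisely when $2(\alpha-1)>-1$, i.e.\ when $\alpha>1/2$. \end{remark}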
We may write \begin{equation} \label{eq:opL} Lu := (\mathop{\rm Id}\nolimits+L_\alpha)u = l\ast_t u = (\delta + l_\alpha)\ast_t u \quad \mbox{ with } \quad L_\alpha u := l_\alpha\ast_t u, \; l_\alpha:= (\frac{1}{\theta} - 1) e_\alpha' (t,\frac{1}{\theta}), \end{equation} and therefore the model system (\ref{eq:PDE})-(\ref{eq:FDE}) is equivalent to Equation (\ref{eq:IntegroPDE}). \subsection{Weak solutions for $L^{\infty}$ coefficients} \label{ssec:weak sol} Now we are in a position to apply the abstract results from the previous section to the original problem. \begin{theorem} \label{th:weak sol} Let $b$ and $c$ be as in (\ref{HypothesisOn_c_and_b}) and (\ref{addHypothesisOn_c}). Let the bilinear form $a(t,\cdot,\cdot)$, $t\in [0,T]$, be defined by (\ref{sesforma}) and (\ref{sesform2}), and the operator $L$ as in (\ref{eq:opL}). Let $f_1\in H_0^2((0,1))$, $f_2\in L^2(0,1)$ and $h\in L^2((0,1)\times(0,T))$. Then there exists a unique $u\in L^2(0,T;H_0^2(0,1))$ satisfying \begin{equation} \label{sol_reg} u' = \frac{du}{dt} \in L^2(0,T;H_0^2(0,1)), \qquad u'' = \frac{d^2 u}{dt^2}\in L^2(0,T;H^{-2}(0,1)), \end{equation} and solving the initial value problem \begin{align} & \dis{u''(t)}{v} + a(t,u(t),v) + \dis{Lu(t)}{v} = \dis{h(t)}{v}, \qquad \forall\, v\in H_0^2((0,1)),\, \mbox{ for a.e. } t \in (0,T), \label{vf}\\ & u(0)=f_1, \qquad u'(0) = f_2. \label{vf_ini} \end{align} (Note that, as in the abstract version, since (\ref{sol_reg}) implies $u \in C([0,T],H^2_0((0,1)))$ and $u' \in C([0,T],H^{-2}((0,1)))$ it makes sense to evaluate $u(0)\in H^2_0((0,1))$ and $u'(0) \in H^{-2}((0,1))$ and (\ref{vf_ini}) claims that these equal $f_1$ and $f_2$, respectively.) \end{theorem} \begin{proof} We may apply Lemma \ref{lemma:m-a} because the bilinear form $a$ and the operator $L$ satisfy Assumption \ref{Ass1} and condition (\ref{eq:L-estimate}). The latter is true according to (\ref{Lbound}) with $C_L=(1 + |\frac{1}{\theta}-1| \|e'_\alpha\|_{L^1(0,T)}) =1+\|l_\alpha\|_{L^1(0,T)}$. As noted earlier, the bilinear forms $a$, $a_0$ and $a_1$ are as in \cite[(20) and (21)]{HoermannOparnica09}. Moreover, it follows as in the proof of \cite[Theorem\ 2.2]{HoermannOparnica09} that $a$ satisfies Assumption \ref{Ass1} with \begin{equation} \label{eq:constants} C_0:=\|c\|_{L^\infty(0,1)}, \quad C_0':=0, \quad C_1:=\|b\|_{L^\infty((0,1)\times(0,T))}, \quad \mu:=\frac{c_0}{2}, \quad \lambda:=C_{1/2}\cdot c_0, \end{equation} where $C_{1/2}$ is the corresponding constant from Ehrling's lemma. \end{proof} We briefly recall two facts about the solution $u$ obtained in Theorem \ref{th:weak sol} (as noted similarly in \cite[Section 2]{HoermannOparnica09}): (i) Since $u(.,t) \in H^2_0((0,1))$ for all $t \in [0,T]$ and $H^2_0((0,1))$ is continuously embedded in $\{v \in C^1([0,1]): v(0) = v(1) = 0,\ v'(0) = v'(1) = 0\}$ (\cite[Corollary 6.2]{Wloka}) the solution $u$ automatically satisfies the boundary conditions. (ii) The properties in (\ref{sol_reg}) imply that $u$ belongs to $C^1([0,T],H^{-2}((0,1))) \cap L^2((0,T)\times(0,1))$, which is a subspace of $\ensuremath{{\cal D}}'((0,1)\times(0,T))$. Thus in case of smooth coefficients $b$ and $c$ we obtain a distributional solution to the ``integro-differential'' equation $$ \ensuremath{\partial}_t^2 u + \ensuremath{\partial}^2_x(c \,\ensuremath{\partial}^2_x)u + b \,\ensuremath{\partial}_x^2 u + l *_t u = h.
$$ \subsection{Colombeau generalized solutions} \label{ssec:Colombeau sol} We will prove unique solvability of Equation (\ref{eq:IntegroPDE}) (or equivalently, of Equations (\ref{eq:PDE})-(\ref{eq:FDE})) with (IC) and (BC) for $u \in \ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ when $b,c, f_1, f_2, g$ and $h$ are Colombeau generalized functions, where $X_T:=(0,1)\times (0,T)$. In more detail, we find a unique solution $u\in\ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ to the equation $$ \ensuremath{\partial}^2_tu + Q(t,x,\ensuremath{\partial}_x)u + Lu = h, \qquad \mbox{ on } X_T $$ with initial conditions $$ u|_{t=0} = f_1\in \ensuremath{{\cal G}}_{H^{\infty}((0,1))}, \qquad \ensuremath{\partial}_t u|_{t=0} = f_2\in \ensuremath{{\cal G}}_{H^{\infty}((0,1))} $$ and boundary conditions $$ u|_{x=0} =u|_{x=1}=0, \qquad \ensuremath{\partial}_x u|_{x=0} = \ensuremath{\partial}_x u|_{x=1}=0. $$ Here $Q$ is a partial differential operator on $\ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ with generalized functions as coefficients, defined by its action on representatives in the form $$ (u_\varepsilon)_\varepsilon \mapsto \left( \ensuremath{\partial}_x^2(c_\varepsilon(x)\,\ensuremath{\partial}_x^2 u_\varepsilon) + b_\varepsilon(x,t)\,\ensuremath{\partial}_x^2 u_\varepsilon \right)_\varepsilon =: (Q_\varepsilon u_\varepsilon)_\varepsilon. $$ Furthermore, the operator $L$ corresponds to convolution on the level of representatives with regularizations of $l$ as given in Lemma \ref{lemma:reg L}: $$ (u_\varepsilon)_\varepsilon \mapsto \left( l_\varepsilon\ast_t u_\varepsilon (t) \right)_\varepsilon =: (L_\varepsilon u_\varepsilon)_\varepsilon, $$ where $l_\varepsilon=l\ast \rho_\varepsilon$, with $\rho_\varepsilon$ introduced in Lemma \ref{lemma:reg L}. \begin{lemma} (i)\ If $l\in L^2_{loc}(\mathbb R)$ and $l$ is $\ensuremath{{\cal C}}^\infty$ in $(0,\infty)$ then $L$ is a continuous operator on $H^{\infty}(X_T)$. Thus $(u_\varepsilon)_\varepsilon\mapsto (Lu_\varepsilon)_\varepsilon$ defines a linear map on $\ensuremath{{\cal G}}_{H^\infty(X_T)}$. (ii)\ If $l\in L^1_{loc}(\mathbb R)$ then $\forall\,\varepsilon\in(0,1]$ the operator $L_\varepsilon$ is continuous on $H^{\infty}(X_T)$ and $(u_\varepsilon)_\varepsilon\mapsto (L_\varepsilon u_\varepsilon)_\varepsilon$ defines a linear map on $\ensuremath{{\cal G}}_{H^\infty(X_T)}$. \end{lemma} \begin{proof} (i)\ From Lemma \ref{lemma:L_1} with $H=L^2(0,1)$ we have that $L$ is continuous on $L^2(X_T)$ with operator norm $\|L\|_{op}\leq \sqrt{T}\cdot \|l\|_{L^2(0,T)}$. Let $u\in H^\infty(X_T)$ and $Lu(x,t)=\int_0^t l(s)u(x,t-s)\,ds$. We have to show that all derivatives of $Lu$ with respect to both $x$ and $t$ are in $L^2(X_T)$. \begin{itemize} \item $\ensuremath{\partial}_x^l Lu(x,t)=\int_0^t l(s) \ensuremath{\partial}_x^l u(x,t-s)\,ds$, and hence, $\|\ensuremath{\partial}_x^l Lu\|_{L^2(X_T)}\leq \sqrt{T}\cdot \|l\|_{L^2(0,T)} \|\ensuremath{\partial}_x^l u\|_{L^2(X_T)}$. \item $\ensuremath{\partial}_t^k \ensuremath{\partial}_x^l Lu=\ensuremath{\partial}_t^k L(\ensuremath{\partial}_x^l u)$, and since the estimates for $L(\ensuremath{\partial}_x^l u)$ are known it suffices to consider only terms $\ensuremath{\partial}_t^k Lu$.
For the first order derivative we have $$ \ensuremath{\partial}_t Lu(x,t) = l(t)u(x,0) + \int_0^t l(s)\ensuremath{\partial}_t u(x,t-s)\, ds $$ and therefore \begin{eqnarray*} \|\ensuremath{\partial}_t Lu\|_{L^2(X_T)} &\leq& \|l\|_{L^2(0,T)} \|u(\cdot,0)\|_{L^2(0,1)} + \sqrt{T}\cdot \|l\|_{L^2(0,T)} \|\ensuremath{\partial}_t u\|_{L^2(X_T)} \\ &\leq& \|l\|_{L^2(0,T)} (\|u\|_{H^m(X_T)} + \sqrt{T}\cdot \|u\|_{H^1(X_T)}), \end{eqnarray*} where we have used the fact that $\mathop{\rm Tr}\nolimits:H^\infty(X_T)\to H^\infty((0,1))$, $u\mapsto u(\cdot,0)$ is continuous, and more precisely, $\mathop{\rm Tr}\nolimits:H^m(X_T)\to H^{m-1}((0,1))$ with estimates $\|\ensuremath{\partial}_x^l \ensuremath{\partial}_t^k u(\cdot,0)\|_{L^2(0,1)}\leq \|u\|_{H^m(X_T)}$, $m=m(k,l)$. Higher order derivatives involve terms $l^{(r)}(t) \ensuremath{\partial}_t^p u(x,0), \ldots, \int_0^t l(s) \ensuremath{\partial}_t^p u(x,t-s)\, ds$, which can be estimated as above. \end{itemize} (ii)\ From Lemma \ref{lemma:reg L} it follows that $l_\varepsilon\in L^2_{loc}(\mathbb R)$, and $\|l_\varepsilon\|_{L^2(0,T)}\leq\gamma_\varepsilon^\frac12 \cdot \|l\|_{L^1(0,T+1)} \|\rho\|_{L^2(B_1(0))}$. From Lemma \ref{lemma:L_1} we know that $L_\varepsilon$ is continuous on $L^2(X_T)$, with $\|L_\varepsilon\|_{op}\leq \sqrt{T}\cdot \|l_\varepsilon\|_{L^2(0,T)} \leq \sqrt{T}\cdot \gamma_\varepsilon^\frac12 \cdot \|l\|_{L^1(0,T+1)} \|\rho\|_{L^2(B_1(0))}$, which is moderate. We can now proceed as in (i) to produce estimates of $\|L_\varepsilon u_\varepsilon\|_{H^r(X_T)}$, $\forall\,r\in\mathbb N$, always replacing the factor $\|l\|_{L^2(0,T)}$ by $\gamma_\varepsilon^\frac12 \cdot \|l\|_{L^1(0,T+1)} \|\rho\|_{L^2(B_1(0))}$. Since $\gamma_\varepsilon\leq \varepsilon^{-N}$ it follows that $(L_\varepsilon u_\varepsilon)_\varepsilon \in \ensuremath{{\cal E}}_{H^\infty(X_T)}$. \end{proof}
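The $\gamma_\varepsilon^{1/2}$ rate appearing in part (ii) is easy to check numerically. The following short script is a rough sanity check added for illustration only (the kernel $l(t)=t^{-1/2}$, which mimics the singular case $\alpha=1/2$, the bump mollifier and all numerical parameters are sample choices, not taken from the model above); it discretizes $l_\varepsilon=l\ast\rho_\varepsilon$ for increasing $\gamma$ and prints $\|l_\varepsilon\|_{L^2(0,T)}/\gamma^{1/2}$, which should remain bounded:

\begin{verbatim}
import numpy as np

T, N = 1.0, 20_000
dt = T / N
t = (np.arange(N) + 0.5) * dt            # midpoint grid avoids t = 0
l = t ** -0.5                            # sample kernel: L^1 but not L^2

def bump(s):
    # standard mollifier profile, supported in (-1, 1), not yet normalized
    out = np.zeros_like(s)
    inside = np.abs(s) < 1
    out[inside] = np.exp(-1.0 / (1.0 - s[inside] ** 2))
    return out

for gamma in (10.0, 40.0, 160.0, 640.0):
    M = int(np.ceil(1.0 / (gamma * dt)))           # support radius in grid points
    s = np.arange(-M, M + 1) * dt
    rho = gamma * bump(gamma * s)
    rho /= rho.sum() * dt                          # enforce integral one
    l_eps = np.convolve(l, rho, mode="same") * dt  # discrete l * rho_eps
    L2 = np.sqrt((l_eps ** 2).sum() * dt)
    print(f"gamma={gamma:6.0f}  ||l_eps||_L2={L2:7.3f}  "
          f"ratio={L2 / np.sqrt(gamma):6.4f}")
\end{verbatim}

The printed ratio indeed stays bounded (in fact it decays), consistent with the upper bound obtained from Lemma \ref{lemma:reg L}.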
\begin{remark} The function $l_\alpha$ as defined in (\ref{eq:opL}) belongs to $L^2_{loc}(\mathbb R)$, if $\alpha>1/2$, and to $L^1_{loc}(\mathbb R)$, if $\alpha\leq 1/2$ (which follows from the explicit form of $e_\alpha'(t,\frac{1}{\theta})$). This means that in case $\alpha > 1/2$ we could in fact define the operator $L$ without regularization of $l$. \end{remark} As in the classical case we also have to impose a condition to ensure compatibility of initial with boundary values, namely (as an equation in generalized numbers) \begin{equation} \label{compatibility} f_1(0) = f_1(1) = 0. \end{equation} Note that if $f_1 \in \ensuremath{{\cal G}}_{H^\infty((0,1))}$ satisfies (\ref{compatibility}) then there is some representative $(f_{1,\varepsilon})_\varepsilon$ of $f_1$ with the property $f_{1,\varepsilon} \in H^2_0((0,1))$ for all $\varepsilon \in\, (0,1)$ (cf.\ the discussion right below Equation (28) in \cite{HoermannOparnica09}). Motivated by condition (\ref{addHypothesisOn_c}) above on the bending stiffness we assume the following about $c$: There exist real constants $c_1 > c_0 > 0$ such that $c\in \ensuremath{{\cal G}}_{H^{\infty}((0,1))}$ possesses a representative $(c_{\varepsilon})_\varepsilon$ satisfying \begin{equation} \label{eq:c-1-2} 0 < c_0 \leq c_{\varepsilon}(x) \leq c_1 \qquad \forall\, x\in (0,1), \forall\, \varepsilon \in\, (0,1]. \end{equation} (Hence any other representative of $c$ has upper and lower bounds of the same type.) As in many evolution-type problems with Colombeau generalized functions we also need the standard assumption that $b$ is of $L^\infty$-log-type (similar to \cite{MO-89}), which means that for some (hence any) representative $(b_\varepsilon)_\varepsilon$ of $b$ there exist $N\in\mathbb N$ and $\varepsilon_0 \in (0,1]$ such that \begin{equation} \label{log_type} \|b_\varepsilon\|_{L^\infty(X_T)} \leq N\cdot \log(\frac{1}{\varepsilon}), \qquad 0 < \varepsilon \leq \varepsilon_0. \end{equation} It has been noted already in \cite[Proposition 1.5]{MO-89} that log-type regularizations of distributions are obtained in a straightforward way by convolution with logarithmically scaled mollifiers. \begin{theorem} \label{th:main} Let $b\in \ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ be of $L^\infty$-log-type and $c\in \ensuremath{{\cal G}}_{H^{\infty}((0,1))}$ satisfy (\ref{eq:c-1-2}). Let $\gamma_\varepsilon=O(\log \frac{1}{\varepsilon})$. For any $f_{1} \in \ensuremath{{\cal G}}_{H^{\infty}((0,1))}$ satisfying (\ref{compatibility}), $f_{2}\in \ensuremath{{\cal G}}_{H^{\infty}((0,1))}$, $h\in \ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ and $l\in\ensuremath{{\cal G}}_{H^{\infty}((0,T))}$, there is a unique solution $u \in \ensuremath{{\cal G}}_{H^{\infty}(X_T)}$ to the initial-boundary value problem \begin{align*} & \ensuremath{\partial}^2_tu + Q(t,x,\ensuremath{\partial}_x)u + Lu = h, \\ & u|_{t=0} = f_1, \quad \ensuremath{\partial}_t u|_{t=0} = f_2, \\ & u|_{x=0} =u|_{x=1}=0, \quad \ensuremath{\partial}_x u|_{x=0} = \ensuremath{\partial}_x u|_{x=1}=0. \end{align*} \end{theorem} \begin{proof} Thanks to the preparations a considerable part of the proof may be adapted from the corresponding proof in \cite[Theorem 3.1]{HoermannOparnica09}. Therefore we give details only for the first part and sketch the procedure from there on. {\bf Existence:} \enspace \enspace We choose representatives $(b_\varepsilon)_\varepsilon$ of $b$ and $(c_{\varepsilon})_\varepsilon$ of $c$ satisfying (\ref{eq:c-1-2}) and (\ref{log_type}).
Further let $(f_{1\varepsilon})_\varepsilon$, $(f_{2\varepsilon})_\varepsilon$, $(l_{\varepsilon})_\varepsilon$, and $(h_{\varepsilon})_\varepsilon$ be representatives of $f_1$, $f_2$, $l$, and $h$, respectively, where we may assume $f_{1,\varepsilon} \in H^2_0((0,1))$ for all $\varepsilon \in \,(0,1)$ (cf.\ (\ref{compatibility})). For every $\varepsilon\in (0,1]$ Theorem \ref{th:weak sol} provides us with a unique solution $u_{\varepsilon}\in H^1((0,T),H^2_0((0,1))) \cap H^2((0,T),H^{-2}((0,1)))$ to \begin{align} P_{\varepsilon}u_{\varepsilon} &: = \ensuremath{\partial}^2_t u_{\varepsilon} + Q_{\varepsilon}(t,x,\ensuremath{\partial}_x)u_{\varepsilon} + L_\varepsilon u_\varepsilon = h_{\varepsilon} \qquad \mbox{ on } X_T, \label{Peps}\\ & u_{\varepsilon}|_{t=0} = f_{1\varepsilon}, \qquad \ensuremath{\partial}_tu_{\varepsilon}|_{t=0} = f_{2\varepsilon}. \nonumber \end{align} In particular, we have $u_{\varepsilon}\in C^1([0,T],H^{-2}((0,1))) \cap C([0,T],H^2_0((0,1)))$. Proposition \ref{prop:EnergyEstimates} implies the energy estimate \begin{equation}\label{eq:EEeps1} \|u_{\varepsilon}(t)\|_{H^2}^2 + \|u_{\varepsilon}'(t)\|_{L^2}^2 \leq \big( D_T^\varepsilon\, \|f_{1\varepsilon}\|_{H^2}^2 + \|f_{2\varepsilon}\|_{L^2}^2 + \int_0^t \|h_{\varepsilon}(\tau)\|_{L^2}^2\,d\tau \big) \cdot \exp(t\, F_T^\varepsilon), \end{equation} where with some $N$ we have as $\varepsilon \to 0$ \begin{align} D_T^\varepsilon &= (\|c_\varepsilon\|_{L^{\infty}} + \lambda (1+T)) /\min(\mu,1) = O(\|c_\varepsilon\|_{L^{\infty}}) = O(1) \label{const_D_eps} \\ F_T^\varepsilon &= \frac{\max \{\|b_\varepsilon\|_{L^{\infty}}+C_{L,\varepsilon}, \|b_\varepsilon\|_{L^{\infty}} +2+\lambda(1+T) \}}{\min(\mu,1)} = O(C_{L,\varepsilon}+\|b_\varepsilon\|_{L^{\infty}}) = O(\log(\varepsilon^{-N})), \label{const_F_eps} \end{align} since $\mu$ and $\lambda$ are independent of $\varepsilon$, and $C_{L,\varepsilon} = O(\log \frac{1}{\varepsilon})$ (cf.\ (\ref{eq:constants})). By moderateness of the initial data $f_{1\varepsilon}$, $f_{2\varepsilon}$ and of the right-hand side $h_{\varepsilon}$ the inequality (\ref{eq:EEeps1}) thus implies that there exists $M$ such that for small $\varepsilon > 0$ we have \begin{equation}\label{EEeps} \|u_{\varepsilon}\|^2_{L^2(X_T)} + \|\ensuremath{\partial}_x u_{\varepsilon}\|^2_{L^2(X_T)} + \| \ensuremath{\partial}_x^2 u_{\varepsilon}\|^2_{L^2(X_T)} + \|\ensuremath{\partial}_t u_{\varepsilon}\|^2_{L^2(X_T)} = O (\varepsilon^{-M}), \qquad \varepsilon\to 0. \end{equation} From here on the remaining chain of arguments proceeds along the lines of the proof in \cite[Theorem 3.1]{HoermannOparnica09}. We only indicate a few key points requiring certain adaptations. The goal is to prove the following properties: \begin{itemize} \item[1.)] For every $\varepsilon \in (0,1]$ we have $u_{\varepsilon}\in H^{\infty}(X_T) \subseteq C^{\infty}(\overline{X_T})$. \item[2.)] Moderateness, i.e.\ for all $l,k\in\mathbb N$ there is some $M\in\mathbb N$ such that for small $\varepsilon > 0$ \begin{equation} \tag{$T_{l,k}$} \label{T_lk} \|\ensuremath{\partial}^l_t\ensuremath{\partial}^k_xu_{\varepsilon}\|_{L^2(X_T)} = O(\varepsilon^{-M}). \end{equation} Note that (\ref{EEeps}) already yields (\ref{T_lk}) for $(l,k) \in \{ (0,0),(1,0),(0,1),(0,2)\}$.
\end{itemize} \noindent\emph{Proof of 1.)} Differentiating (\ref{Peps}) (considered as an equation in $\ensuremath{{\cal D}}'((0,1)\times(0,T))$) with respect to $t$ we obtain $$ P_{\varepsilon}(\ensuremath{\partial}_tu_{\varepsilon}) = \ensuremath{\partial}_th_{\varepsilon} - \ensuremath{\partial}_t b_\varepsilon(x,t)\ensuremath{\partial}_x^2u_{\varepsilon} - l_\varepsilon(t) f_{1,\varepsilon} =: \tilde{h}_\varepsilon, $$ where we used $\ensuremath{\partial}_t(L_\varepsilon u_\varepsilon)= L_\varepsilon (\ensuremath{\partial}_t u_\varepsilon) + l_\varepsilon(t)u_\varepsilon(0)$. We have $\tilde{h}_\varepsilon \in H^1((0,T),L^2(0,1))$ since $\ensuremath{\partial}_t h_{\varepsilon}\in H^{\infty}(X_T)$, $l_\varepsilon\in H^{\infty}((0,T))$, $f_{1,\varepsilon}\in H^{\infty}((0,1))$, $\ensuremath{\partial}_t b_\varepsilon(x,t)\in H^{\infty}(X_T) \subset W^{\infty,\infty}(X_T)$ and $\ensuremath{\partial}^2_xu_{\varepsilon}\in H^1((0,T),L^2(0,1))$. Furthermore, since $Q_\varepsilon$ depends smoothly on $t$ as a differential operator in $x$ and $u_\varepsilon(0) = f_{1,\varepsilon} \in H^{\infty}((0,1))$ we have \begin{align*} (\ensuremath{\partial}_t u_{\varepsilon})(\cdot,0) & = f_{2,\varepsilon} =: \tilde{f}_{1,\varepsilon} \in H^{\infty}((0,1)),\\ (\ensuremath{\partial}_t(\ensuremath{\partial}_tu_{\varepsilon}))(\cdot,0) & = h_{\varepsilon}(\cdot,0) - Q_{\varepsilon}(u_{\varepsilon}(\cdot,0))-L_\varepsilon u_\varepsilon (\cdot,0)= h_{\varepsilon}(\cdot,0) - (Q_{\varepsilon}+L_\varepsilon) f_{1,\varepsilon} =: \tilde{f}_{2,\varepsilon} \in H^{\infty}((0,1)). \end{align*} Hence $\ensuremath{\partial}_tu_{\varepsilon}$ satisfies an initial value problem for the partial differential operator $P_\varepsilon$ as in (\ref{Peps}) with initial data $\tilde{f}_{1,\varepsilon}$, $\tilde{f}_{2,\varepsilon}$ and right-hand side $\tilde{h}_{\varepsilon}$ instead. However, this time we have to use $V = H^2((0,1))$ (replacing $H^2_0((0,1))$) and $H = L^2(0,1)$ in the abstract setting, which still can serve to define a Gelfand triple $V\hookrightarrow H \hookrightarrow V'$ (cf.\ \cite[Theorem 17.4(b)]{Wloka}) and thus allows for application of Lemma \ref{lemma:m-a} and the energy estimate (\ref{eq:EE}) (with precisely the same constants). Therefore we obtain $\ensuremath{\partial}_tu_{\varepsilon}\in H^1([0,T],H^2((0,1)))$, i.e.\ $u_{\varepsilon}\in H^2((0,T),H^2((0,1)))$ and from the variants of (\ref{eq:EEeps1}) (with exactly the same constants $D_T^\varepsilon$ and $F_T^\varepsilon$) and (\ref{EEeps}) with $\ensuremath{\partial}_t u_\varepsilon$ in place of $u_\varepsilon$ that for some $M$ we have \begin{equation} \| \ensuremath{\partial}_t u_{\varepsilon}\|^2_{L^2(X_T)} + \|\ensuremath{\partial}_x \ensuremath{\partial}_t u_{\varepsilon}\|^2_{L^2(X_T)} + \| \ensuremath{\partial}_x^2 \ensuremath{\partial}_t u_{\varepsilon}\|^2_{L^2(X_T)} + \|\ensuremath{\partial}_t^2 u_{\varepsilon}\|^2_{L^2(X_T)} = O (\varepsilon^{-M})\quad (\varepsilon\to 0). \end{equation} Thus we have proved (\ref{T_lk}) with $(l,k) = (2,0), (1,1), (1,2)$ in addition to those obtained from (\ref{EEeps}) directly. The remaining part of the proof of property 1.) requires the exact same kind of adaptations in the corresponding parts in Step 1 of the proof of \cite[Th.\ 3.1]{HoermannOparnica09} and we skip its details here. In particular, along the way one also obtains that \begin{center} ($T_{l,k}$) holds for derivatives of arbitrary $l$ and $k \leq 2$. \end{center} \noindent\emph{Proof of 2.)} From the estimates achieved in proving 1.)
and equation (\ref{Peps}) we deduce that $$ k_\varepsilon := \ensuremath{\partial}_x^2(c_\varepsilon\, \ensuremath{\partial}_x^2 u_\varepsilon) = h_\varepsilon - b_\varepsilon\, \ensuremath{\partial}_x^2 u_\varepsilon - \ensuremath{\partial}_t^2 u_\varepsilon - L_\varepsilon u_\varepsilon $$ satisfies, for every $l \in \mathbb N$ and some $N_l$, the estimate \begin{equation}\label{mod22} \| \ensuremath{\partial}_t^l k_\varepsilon \|_{L^2(X_T)} = O(\varepsilon^{-N_l}) \qquad (\varepsilon\to 0). \end{equation} Here we are again in the same situation as in Step 2 of the proof of \cite[Theorem 3.1]{HoermannOparnica09}, where now $k_\varepsilon$ plays the role of $h_\varepsilon$ there. Skipping again the details of completely analogous arguments, we arrive at the conclusion that the class of $(u_\varepsilon)_\varepsilon$ defines a solution to the initial value problem. Moreover, we have by construction that $u_\varepsilon(t) \in H^2_0((0,1))$ for all $t \in [0,T]$, hence $u(0,t) = u(1,t) = 0$ and $\ensuremath{\partial}_x u(0,t) = \ensuremath{\partial}_x u(1,t) = 0$, and thus $u$ also satisfies the boundary conditions.
{\bf Uniqueness:}\enspace \enspace If $u = [(u_\varepsilon)_\varepsilon]$ satisfies the initial-boundary value problem with zero initial values and right-hand side, then we have for all $q\geq 0$ $$ \|f_{1,\varepsilon}\|_{H^2} = O(\varepsilon^q), \quad \|f_{2,\varepsilon}\|_{L^2} = O(\varepsilon^q), \quad \|h_{\varepsilon}\|_{L^2(X_T)} = O(\varepsilon^q) \qquad \text{as } \varepsilon\to 0. $$ Therefore the energy estimate (\ref{eq:EEeps1}) in combination with (\ref{const_D_eps})-(\ref{const_F_eps}) implies for all $q \geq 0$ an estimate $$ \|u_\varepsilon\|_{L^2(X_T)} = O(\varepsilon^q) \quad (\varepsilon \to 0), $$ from which we conclude that $(u_\varepsilon)_\varepsilon \in \ensuremath{{\cal N}}_{H^{\infty}(X_T)}$, i.e., $u = 0$. \end{proof}
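\begin{remark} The role of the logarithmic assumptions can be read off directly from the constants in the proof: by (\ref{const_F_eps}) we have $F_T^\varepsilon \leq C\log(\varepsilon^{-N})$ for small $\varepsilon$ and some constant $C>0$, so the exponential factor in (\ref{eq:EEeps1}) satisfies \[ \exp(t\, F_T^\varepsilon) \leq \exp\big(CNT\log(1/\varepsilon)\big) = \varepsilon^{-CNT}, \qquad 0\leq t\leq T, \] which is a moderate bound. A polynomially growing bound, say $F_T^\varepsilon = O(\varepsilon^{-1})$, would instead produce a factor $\exp(T\varepsilon^{-1})$ and destroy moderateness; this is why the proof needs the $L^\infty$-log-type assumption (\ref{log_type}) on $b$ and the scaling $\gamma_\varepsilon=O(\log\frac{1}{\varepsilon})$, which keeps $C_{L,\varepsilon}=O(\log\frac{1}{\varepsilon})$. \end{remark}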
\end{document}
\begin{document} \title{Semigroups in Stable Structures} \author{Yatir Halevi\footnote{The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC Grant Agreement No. 291111.}} \date{} \maketitle \begin{abstract} Assume $G$ is a definable group in a stable structure $M$. Newelski showed that the semigroup $S_G(M)$ of complete types concentrated on $G$ is an inverse limit of the $\infty$-definable (in $M^{eq}$) semigroups $S_{G,\Delta}(M)$. He also showed that it is strongly $\pi$-regular: for every $p\in S_{G,\Delta}(M)$ there exists $n\in\mathbb{N}$ such that $p^n$ is in a subgroup of $S_{G,\Delta}(M)$. We show that $S_{G,\Delta}(M)$ is in fact an intersection of definable semigroups, so $S_G(M)$ is an inverse limit of definable semigroups, and that the latter property is enjoyed by all $\infty$-definable semigroups in stable structures. \end{abstract} \section{Introduction} A semigroup is a set together with an associative binary operation. Although the study of semigroups dates back to the start of the 20th century, not much attention has been given to semigroups in stable structures. One of the only facts known about them is \begin{proposition*}{\cite{unidim}} A stable semigroup with left and right cancellation, or with left cancellation and right identity, is a group. \end{proposition*} Recently $\infty$-definable semigroups in stable structures made an appearance in a paper by Newelski \cite{Newelski-stable-groups}: Let $G$ be a definable group inside a stable structure $M$. Define $S_G(M)$ to be the set of all types in $S(M)$ which are concentrated on $G$. $S_G(M)$ may be given the structure of a semigroup by defining for $p,q\in S_G(M)$: \[p\cdot q=tp(a\cdot b/M),\] where $a\models p, b\models q$ and $a\forkindep[M]b$. Newelski gives an interpretation of $S_{G,\Delta}(M)$ (where $\Delta$ is a finite set of invariant formulae) as an $\infty$-definable set in $M^{eq}$ and thus $S_G(M)$ may be interpreted as an inverse limit of $\infty$-definable semigroups in $M^{eq}$. As a result, he shows that for every local type $p\in S_{G,\Delta}(M)$ there exists an $n\in\mathbb{N}$ such that $p^n$ is in a subgroup of $S_{G,\Delta}(M)$. In fact he shows that $p^n$ is equal to a translate of a $\Delta$-generic of a $\Delta$-definable connected subgroup of $G(M)$. \begin{definition*} A semigroup $S$ is called \emph{strongly $\pi$-regular} or an \emph{epigroup} if for all $a\in S$ there exists $n\in\mathbb{N}$ such that $a^n$ is in a subgroup of $S$. \end{definition*} \begin{question} Is this property enjoyed by all $\infty$-definable semigroups in stable structures? \end{question} Since we're dealing with $\infty$-definable semigroups, remembering that every $\infty$-definable group in a stable structure is an intersection of definable groups, an analogous question arises: \begin{question} Is every $\infty$-definable semigroup in a stable structure an intersection of definable ones? Is $S_{G,\Delta}(M)$ an intersection of definable semigroups? \end{question} In this paper we answer both these questions. It is a classical result about affine algebraic semigroups that they are strongly $\pi$-regular. Recently Brion and Renner \cite{RenBrio} proved that this is true for all algebraic semigroups. In fact, we'll show that \begin{propff}{T:inf-def-is-spr} Let $S$ be an $\infty$-definable semigroup inside a stable structure. Then $S$ is strongly $\pi$-regular.
\end{propff} At least in the definable case this is a direct consequence of stability; the general case is not harder, just a bit more technical. One can ask if what happens in $S_{G,\Delta}(M)$ is true in general $\infty$-definable semigroups, that is, whether every element is a power away from a translate of an idempotent. However, this already fails in $M_2(\mathbb{C})$. As for the second question, in Section \ref{sec:S_G(M)} we show that $S_{G,\Delta}(M)$ is an intersection of definable semigroups. In fact, \begin{theoremff}{T:invlim} $S_G(M)$ is an inverse limit of definable semigroups in $M^{eq}$. \end{theoremff} Unfortunately, not all $\infty$-definable semigroups are an intersection of definable ones. Milliet showed that every $\infty$-definable semigroup inside a small structure is an intersection of definable semigroups \cite{on_enveloping}. In particular this is true for $\omega$-stable structures, and so for instance in $ACF$. Already in the superstable case this is not true in general; see \Cref{E:counter-exam}. However, there are some classes of semigroups in which this does hold. We recall some basic definitions from semigroup theory that we'll need. See \Cref{ss:semigroups} for more information. \begin{definition*} \begin{enumerate} \item An element $e\in S$ in a semigroup $S$ is an \emph{idempotent} if $e^2=e$. \item A semigroup $S$ is called an \emph{inverse semigroup} if for every $a\in S$ there exists a unique $a^{-1}\in S$ such that \[aa^{-1}a=a,\quad a^{-1}aa^{-1}=a^{-1}.\] \item A \emph{Clifford semigroup} is an inverse semigroup in which the idempotents are central. A \emph{surjective Clifford monoid} is a Clifford monoid in which for every $a\in S$ there exist $g\in G$ and an idempotent $e$ such that $a=ge$, where $G$ is the unit group of $S$. \end{enumerate} \end{definition*} These kinds of semigroups do arise in the context of $S_G(M)$. It is probably folklore, but one may show (see \Cref{ss:S_G inverse}) that if $G$ is $1$-based then $S_G(M)$ is an inverse monoid. In \Cref{ss:S_G inverse} we give a condition on $G$ for $S_G(M)$ to be Clifford. \begin{theoremff}{T:Cliff_with_surj} Let $S$ be an $\infty$-definable surjective Clifford monoid in a stable structure. Then $S$ is contained in a definable monoid, extending the multiplication on $S$. This monoid is also a surjective Clifford monoid. \end{theoremff} It also follows from the proof that every such monoid is an intersection of definable ones. In the process of proving the above theorem we show two results which might be interesting in their own right. Since $\infty$-definable semigroups in stable structures are s$\pi$r, one may define a partial order on them given by \[a\leq b \Leftrightarrow a=be=fb \text{ for some } e,f\in E(S^1),\] where $S^1$ is $S\cup \{1\}$, with $1$ defined to be an identity element. If $a\cdot b\leq a,b$ for every $a,b\in S$, one may show that there exists $n\in\mathbb{N}$ such that every product of $n+1$ elements is already a product of $n$ of them (\Cref{P:in negative_order_n+1_is_n}). As a result any such semigroup is an intersection of definable ones. In particular, \begin{propff}{P:idemp_is_inside_definable} Let $E$ be an $\infty$-definable commutative idempotent semigroup inside a stable structure, then $E$ is contained in a definable commutative idempotent semigroup. Furthermore, it is an intersection of definable ones. \end{propff}
\section{Preliminaries}\label{S:Preliminaries} \subsection{Notations} We fix some notations. We'll usually not distinguish between singletons and sequences, thus we may write $a\in M$ and actually mean $a=(a_1,\dots,a_n)\in M^n$, unless a distinction is necessary. $A,B,C,\dots$ will denote parameter sets and $M,N,\dots$ will denote models. When talking specifically about semigroups, monoids and groups (either definable, $\infty$-definable or models) we'll denote them by $S$, $M$ and $G$, respectively. We use juxtaposition $ab$ for concatenation of sequences, or $AB$ for $A\cup B$ if dealing with sets. That being said, since we will be dealing with semigroups, when there is a chance of confusion we'll try to differentiate between the concatenation $ab$ and the semigroup multiplication $ab$ by denoting the latter by $a\cdot b$. \subsection{Semigroups}\label{ss:semigroups} Clifford and Preston \cite{Clifford1,Clifford2} remains a very good reference for the theory of semigroups, but Higgins \cite{Higgins} and Howie \cite{Howie} are much more recent sources. A set $S$ with an associative binary operation is called a \emph{semigroup}.\\ An element $e\in S$ is an \emph{idempotent} if $e^2=e$. We'll denote by $E(S)$ the subset of all idempotents of $S$.\\ By a subgroup of $S$ we mean a subsemigroup $G\subseteq S$ for which there exists an idempotent $e\in G$ such that $(G,\cdot)$ is a group with neutral element $e$.\\ $S$ is \emph{strongly $\pi$-regular} (s$\pi$r) if for each $a\in S$ there exists $n>0$ such that $a^n$ lies in a subgroup of $S$. \begin{remark} These types of semigroups are also known as \emph{epigroups} and their elements as \emph{group-bound}. \end{remark} A semigroup with an identity element is called a \emph{monoid}. Notice that any semigroup can be extended to a monoid by artificially adding an identity element. We'll denote it by $S^1$. If $S$ is a monoid we'll denote by $G(S)$ its subgroup of invertible elements.\\ Given two semigroups $S,S^\prime$, a \emph{homomorphism of semigroups} is a map \[\varphi :S\to S^\prime\] such that $\varphi(xy)=\varphi(x)\varphi(y)$ for all $x,y\in S$. If $S,S^\prime$ are monoids, then we say $\varphi$ is a \emph{homomorphism of monoids} if in addition $\varphi(1_S)=1_{S^\prime}$.\\ \begin{definition} The \emph{natural partial order} on $E(S)$ is defined by \[e\leq f \Leftrightarrow ef=fe=e.\] \end{definition} \begin{proposition}\cite[Section 1.7]{Clifford1}\label{P:Max.subgroups} For every $e,f\in E(S)$ we have the following: \begin{enumerate} \item $eSe$ is a subsemigroup of $S$. In fact, it is a monoid with identity element $e$; \item $eSe\subseteq fSf \Leftrightarrow e\leq f;$ \item Every maximal subgroup of $S$ is of the form $G(eSe)$ (the unit group of $eSe$) for $e\in E(S)$; \item If $e\neq f$ then $G(eSe)\cap G(fSf)=\emptyset$. \end{enumerate} \end{proposition} There are various ways to extend the partial order on the idempotents to a partial order on the entire semigroup. See \cite[Section 1.4]{Higgins} for a discussion about them. We'll use the \emph{natural partial order} on $S$. It has various equivalent definitions; we present the one given in \cite[Proposition 1.4.3]{Higgins}. \begin{definition}\label{D:general_def_of_order} The relation \[a\leq b \Leftrightarrow a=xb=by,xa=a \text{ for some } x,y\in S^1\] is called the \emph{natural partial order} on $S$. \end{definition} Notice that this extends the partial order on $E(S)$.
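For finite semigroups all of these notions can be computed directly from the multiplication table. The following small script is an illustration added here (the full transformation monoid on two points is a sample choice, and the code is not taken from any of the cited sources); it lists the idempotents and the natural partial order among them:

\begin{verbatim}
from itertools import product

# Full transformation monoid T_2 on {1,2}; an element f is encoded
# as the tuple (f(1), f(2)).
S = [(1, 2), (2, 1), (1, 1), (2, 2)]      # id, swap, const 1, const 2

def mul(f, g):
    # composition: first apply g, then f
    return tuple(f[g[x - 1] - 1] for x in (1, 2))

E = [e for e in S if mul(e, e) == e]
print("E(S) =", E)                        # id and the two constant maps

# natural partial order on E(S): e <= f  iff  ef = fe = e
for e, f in product(E, repeat=2):
    if e != f and mul(e, f) == e and mul(f, e) == e:
        print(e, "<=", f)
\end{verbatim}

It prints that the two constant maps lie below the identity and are incomparable with each other. Note that the idempotents of this monoid do not commute ($(1,1)\cdot(2,2)=(1,1)$ while $(2,2)\cdot(1,1)=(2,2)$), so it is not an inverse semigroup.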
If $S$ is s$\pi$r this partial order takes a more elegant form: \begin{proposition}\cite[Corollary 1.4.6]{Higgins} On s$\pi$r semigroups there is a natural partial order extending the order on $E(S)$: \[a\leq b \Leftrightarrow a=be=fb \text{ for some } e,f\in E(S^1).\] \end{proposition} \subsubsection{Clifford and Inverse Semigroups}\label{subsec:Clifford} \begin{definition} A semigroup $S$ is called \emph{regular} if for every $a\in S$ there exists at least one element $b\in S$ such that \[aba=a,\quad bab=b.\] Such an element $b$ is called a \emph{pseudo-inverse} of $a$. \end{definition} \begin{definition} A semigroup $S$ is called an \emph{inverse semigroup} if for every $a\in S$ there exists a unique $a^{-1}\in S$ such that \[aa^{-1}a=a,\quad a^{-1}aa^{-1}=a^{-1}.\] \end{definition} Basic facts about inverse semigroups: \begin{proposition}\cite[Section V.1, Theorem 1.2 and Proposition 1.4]{Howie} Let $S$ be an inverse semigroup. \begin{enumerate} \item For every $a,b\in S$, $\left( a^{-1}\right)^{-1}=a$ and $\left( ab\right)^{-1}=b^{-1}a^{-1}.$ \item For every $a\in S$, $aa^{-1}$ and $a^{-1}a$ are idempotents. \item The idempotents commute. Thus $E(S)$ is a commutative subsemigroup, and hence a semilattice. \end{enumerate} \end{proposition} The basic example of an inverse semigroup is the set $\mathscr{I}(X)$ of partial one-to-one mappings of a set $X$; that is, mappings whose domain is a (possibly empty) subset of $X$. The composition of two ``incompatible'' mappings is the empty mapping. The first surprising fact is that this is indeed an \emph{inverse} semigroup, but one can say even more (a generalization of Cayley's theorem for groups): \begin{theorem}[The Vagner-Preston Representation Theorem]\cite[Section V.1, Theorem 1.10]{Howie} If $S$ is an inverse semigroup then there exists a set $X$ and a monomorphism $\phi: S\to \mathscr{I}(X)$. \end{theorem} If $S$ is an inverse semigroup, the partial order on $S$ takes the following form: $a\leq b$ if there exists $e\in E(S)$ such that $a=eb$. \begin{proposition} \begin{enumerate} \item $\leq$ is a partial order relation. \item If $a,b,c\in S$ such that $a\leq b$ then $ac\leq bc$ and $ca\leq cb$. Furthermore, $a^{-1}\leq b^{-1}$. \end{enumerate} \end{proposition} \begin{definition} A \emph{Clifford semigroup} is an inverse semigroup in which the idempotents are central. \end{definition} \begin{remark} Different sources give different, but equivalent, definitions of a Clifford semigroup. For instance Howie defines a Clifford semigroup to be a regular semigroup $S$ in which the idempotents are central \cite[Section IV.2]{Howie}. One may show that $S$ is an inverse semigroup if and only if it is regular and the idempotents commute \cite[Section V.1, Theorem 1.2]{Howie}, so the definitions coincide. \end{remark} The following is well known, but we add a proof instead of adding another source. \begin{proposition} $S$ is a Clifford semigroup if and only if it is an inverse semigroup and $aa^{-1}=a^{-1}a$ for all $a\in S$. \end{proposition} \begin{proof} Assume $S$ is a Clifford semigroup and let $a\in S$. Since $aa^{-1}$ and $a^{-1}a$ are idempotents and central, \[aa^{-1}=a(a^{-1}a)a^{-1}=(a^{-1}a)aa^{-1}=a^{-1}a(aa^{-1})=a^{-1}(aa^{-1})a=a^{-1}a.\] Conversely, we must show that the idempotents are central. For $a\in S$ and $e\in E(S)$, we'll show that $ea=(ea)(a^{-1}e)(ea)=(ea)(ea^{-1})(ea)$ and thus, by the uniqueness of pseudo-inverses, $ea=ae$.
By our assumption \[(ea)(ea^{-1})(ea)=eae(a^{-1}e)(ea)=eae(ea)(a^{-1}e)=eaeaa^{-1}e.\] Again, $(ea)(a^{-1}e)=(a^{-1}e)(ea)$, so \[=eaa^{-1}eea=eaa^{-1}ea,\] and by the commutativity of the idempotents ($e$ and $aa^{-1}$), \[=eaa^{-1}a=ea.\] \end{proof} \begin{definition}\cite[Chapter IV]{Howie} A semigroup $S$ is said to be a \emph{strong semilattice of semigroups} if there exists a semilattice $Y$, disjoint subsemigroups $\{S_{\alpha}:\alpha\in Y\}$ and homomorphisms $\{\phi_{\alpha,\beta}:S_\alpha\to S_\beta:\alpha,\beta\in Y, \alpha\geq \beta\}$ such that \begin{enumerate} \item $S=\bigcup_\alpha S_\alpha$. \item $\phi_{\alpha,\alpha}$ is the identity. \item For every $\alpha\geq \beta\geq\gamma$ in $Y$, $\phi_{\beta,\gamma}\phi_{\alpha,\beta}=\phi_{\alpha,\gamma}$. \item The multiplication on $S$ is given by $xy=\phi_{\alpha,\alpha\beta}(x)\,\phi_{\beta,\alpha\beta}(y)$ for $x\in S_\alpha$ and $y\in S_\beta$ (writing the semilattice operation on $Y$ multiplicatively). \end{enumerate} \end{definition} \begin{theorem}\cite[Section IV.2, Theorem 2.1]{Howie} $S$ is a Clifford semigroup if and only if it is a strong semilattice of groups. The semilattice is $E(S)$, the disjoint groups are \[\{G_e=G(eSe): e\in E(S)\}\] the maximal subgroups of $S$ and the homomorphism $\phi_{e,ef}$ is given by multiplication by $f$. \end{theorem}
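\begin{example} A standard concrete illustration (added here, not taken from the sources above) is the multiplicative monoid $(\mathbb{C},\cdot)$, which is definable in the stable structure $(\mathbb{C},+,\cdot)$: its idempotents are $E=\{0,1\}$, the maximal subgroups are $G_1=\mathbb{C}^{\times}$ and $G_0=\{0\}$, the semilattice is the two-element chain $1\geq 0$, and the homomorphism $\phi_{1,0}$ is multiplication by $0$. Since every $a\in\mathbb{C}$ can be written as $ge$ with $g\in\mathbb{C}^{\times}$ and $e\in\{0,1\}$, this is moreover a surjective Clifford monoid in the sense of the Introduction. \end{example}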
\section{$\infty$-definable Semigroups and Monoids}\label{S:big-wedge-defin} Let $S$ be an $\infty$-definable semigroup in a stable structure. Assume that $S$ is defined by \[\bigwedge_i \varphi_i(x).\] \begin{remark} We assume that $S$ is defined over $\emptyset$ just for notational convenience. Moreover we assume that the $\varphi_i$s are closed under finite conjunctions. \end{remark}
\subsection{Strongly $\pi$-regular} Our goal is to prove that an $\infty$-definable semigroup inside a stable structure is s$\pi$r. To better understand what's going on we start with an easier case: \begin{definition} A \emph{stable semigroup} is a stable structure $S$ such that there is a definable binary function $\cdot$ which makes $(S,\cdot)$ into a semigroup. \end{definition} The following was already noticed in \cite{LoseyHans} for semigroups with chain conditions, but we give it in a ``stable semigroup'' setting. \begin{proposition}\label{P:idem} Any stable semigroup has an idempotent. \end{proposition} \begin{proof} Let $a\in S$ and let \[\theta(x,y)= \exists u (u\cdot a=a\cdot u\wedge u\cdot x=y).\] Obviously, $S\models \theta(a^{3^m},a^{3^n})$ for $m<n$. $S$ is stable, hence $\theta$ doesn't have the order property. Thus there exist $m<n$ such that $S\models \theta(a^{3^n},a^{3^m})$. Let $C\in S$ be an element commuting with $a$ such that $C\cdot a^{3^n}=a^{3^m}$. Since $3^n> 2\cdot3^m$, multiplying by $a^{3^n-2\cdot 3^m}$ yields $Ca^{2(3^n-3^m)}=a^{3^n-3^m}$. Notice that since $C$ commutes with $a$, $Ca^{3^n-3^m}$ is an idempotent. \end{proof} \begin{proposition}\label{P:s pi r} Any stable semigroup is s$\pi$r. \end{proposition} \begin{proof} Let $a\in S$. From the proof of \Cref{P:idem} there exist $C\in S$ commuting with $a$ and $n>0$ such that $Ca^{2n}=a^n$. Set $e:=Ca^n$; as before, $e$ is an idempotent. Then $a^n=e\cdot a^n\cdot e$ and $a^n\cdot eCe=e$, so $a^n$ lies in the unit group of $eSe$. \end{proof} \begin{remark} Given $a\in S$, there exists a unique idempotent $e=e_a\in S$ such that $a^n$ belongs to the unit group of $eSe$ for some $n>0$. Indeed, for two idempotents $e\neq f$ the unit groups of $eSe$ and $fSf$ are disjoint (Proposition \ref{P:Max.subgroups}). \end{remark} Furthermore, we have \begin{lemma}\cite{Munn}\label{L:higher-power-in-max-subgrp} Let $S$ be a semigroup and $x\in S$. If for some $n$, $x^n$ lies in a subgroup of $S$ with identity $e$ then $x^m$ lies in the unit group of $eSe$ for all $m\geq n$. \end{lemma} \begin{corollary}\label{C:universal n} There exists $n>0$ (depending only on $S$) such that for all $a\in S$, $a^n$ belongs to the unit group of $e_aSe_a$. \end{corollary} \begin{proof} Let $\phi_i(x)$ be the formula `$x^i\in \text{ the unit group of } e_xSe_x$'. $\bigcup_i[\phi_i(x)]=S_1(S)$, since every elementary extension of $S$ is also stable and hence s$\pi$r. By compactness there exist $n_1,\dots,n_k>0$ such that $S_1(S)=[\phi_{n_1}\vee\dots\vee \phi_{n_k}]$. $n=n_1\cdots n_k$ is our desired integer. \end{proof} We return to the general case of $S$ being an $\infty$-definable semigroup inside a stable structure. The following is an easy consequence of stability: \begin{proposition} \label{P:chain of idempotents} Every chain of idempotents in $S$, with respect to the partial order on them, is finite, and the length of such chains is uniformly bounded. \end{proposition} Our goal is to show that for every $a\in S$ there exists an idempotent $e\in S$ and $n\in\mathbb{N}$ such that $a^n$ is in the unit group of $eSe$. We'll want to assume that $S$ is a conjunction of countably many formulae. For that we'll need to make some observations; the following is well known, but we add a proof for completeness. \begin{lemma} Let $S$ be an $\infty$-definable semigroup. Then there exist $\infty$-definable semigroups $H_i$ such that each $H_i$ is defined by at most a countable set of formulae, and $S=\bigcap H_i$. \end{lemma} \begin{proof} Let $S=\bigwedge_{i\in I} \varphi_i$ and assume that the $\varphi_i$s are closed under finite conjunctions.
By compactness we may assume that for all $i$ and $x,y,z$ $$\varphi_i(x)\wedge\varphi_i(y)\wedge\varphi_i(z)\to (xy)z=x(yz).$$ Let $i^0\in I$. By compactness there exists $i^0_1\in I$ such that for all $x,y$: $$\varphi_{i^0_1}(x)\wedge \varphi_{i^0_1}(y)\to \varphi_{i^0}(xy).$$ Continuing in this way, we construct a sequence $i^0,i^0_1,i^0_2,\dots$ and define $$H_{i^0}=\bigwedge_j \varphi_{i^0_j}.$$ This is indeed a semigroup and $$S=\bigcap_{i\in I} H_i.$$ \end{proof} The following is also well known. \begin{proposition}{\cite{unidim}} An $\infty$-definable semigroup in a stable structure with left and right cancellation, or with left cancellation and right identity, is a group. \end{proposition} As a consequence, \begin{lemma}\label{L:max_subgrps_are_inf_definable} Let $S$ be an $\infty$-definable semigroup and $G_e\subseteq S$ a maximal subgroup (with idempotent $e\in E(S)$). Then $G_e$ is relatively definable in $S$. \end{lemma} \begin{proof} Let $S=\bigwedge_i \varphi_i(x)$. By compactness, there exists a definable set $S\subseteq S_0$ such that for all $x,y,z\in S_0$ \[x(yz)=(xy)z.\] Let $G_e(x)$ be \[\bigwedge_i\varphi_i(x)\wedge (xe=ex=x)\wedge \bigwedge_i(\exists y\in S_0)(\varphi_i(y)\wedge ye=ey=y\wedge yx=xy=e).\] This $\infty$-formula defines the maximal subgroup $G_e$. Indeed if $a\models G_e(x)$ and $b,b^\prime\in S_0$ are such that \[\varphi_i(b)\wedge be=eb=b\wedge ba=ab=e\] and \[\varphi_j(b^\prime)\wedge b^\prime e=eb^\prime=b^\prime\wedge b^\prime a=ab^\prime=e,\] then \[b^\prime=b^\prime e=b^\prime(ab)=(b^\prime a)b=eb=b.\] Hence there exists an inverse of $a$ in $S$. Let $G_0$ be a definable group containing $G_e$ (see \cite{unidim}). $G_0\cap S$ is an $\infty$-definable subsemigroup of $S$ with cancellation, hence a subgroup. It is thus contained in the maximal subgroup $G_e$ and so equal to it. \end{proof} \begin{lemma} Let $S$ be an $\infty$-definable semigroup and $S\subseteq S_1$ an $\infty$-definable semigroup containing it. If $S_1$ is s$\pi$r then so is $S$. \end{lemma} \begin{proof} Let $a\in S$ and let $a^n\in G_e\subseteq S_1$ where $G_e$ is a maximal subgroup of $S_1$. Thus $a^n\in G_e\cap S$. Since $G_e\cap S$ is an $\infty$-definable subsemigroup of $S$ with cancellation, it is a subgroup. \end{proof} We may, thus, assume that $S$ is the conjunction of countably many formulae. Furthermore, we may, and will, assume that $S$ is commutative. Indeed, let $a\in S$. By compactness we may find a definable set $S\subseteq S_0$ such that for all $x,y,z\in S_0$: \[x(yz)=(xy)z.\] Define $D_1=\{x\in S_0: xa=ax\}$ and then \[D_2=\{x\in S: (\forall c\in D_1)\: xc=cx\}.\] $D_2$ is an $\infty$-definable commutative subsemigroup of $S$ with $a\in D_2$. \begin{lemma}\label{L:technical-stuff-for-spr} There exist definable sets $S_i$ such that $S=\bigcap S_i$, the multiplication on $S_i$ is commutative, and for all $i>1$ there exist $C_i\in S_i$ and $n_i,m_i\in \mathbb{N}$ such that \begin{enumerate} \item $n_i>2m_i$; \item $e_i:=C_ia^{n_i-m_i}$ is an idempotent; and furthermore for all $1<j\leq i$: \item $n_j-m_j\leq n_i-m_i$; \item $e_je_i=e_i$; \item $e_ia^{n_i-m_i}=a^{n_i-m_i}$. \end{enumerate} \end{lemma} \begin{proof} By compactness we may assume that $S=\bigcap S_i$, where \[S_0\supseteq S_1\supseteq S_2\supseteq \dots\] are definable sets such that for all $i>1$ we are allowed to multiply associatively and commutatively $\leq 20$ elements of $S_i$ and get an element of $S_{i-1}$.
Let $i>1$ and let $\theta(x,y)$ be \[\exists u\in S_i \: ux=y.\] Obviously, $\models \theta(a^{3^k},a^{3^l})$ for $k<l$. By stability $\theta$ doesn't have the order property. Thus there exist $k<l$ such that $\models \theta(a^{3^l},a^{3^k})$. Let $C_i\in S_i$ be such that $C_ia^{3^l}=a^{3^k}$. Since $l>k$ we have $3^l> 2\cdot 3^k$ (this gives 1). Let $n_i=3^l$ and $m_i=3^k$. $e_i:=C_ia^{n_i-m_i}\in S_{i-1}$ is an idempotent (this gives 2); to see this, first notice that \[C_ia^{2n_i-2m_i}=C_ia^{n_i}a^{n_i-2m_i}=a^{m_i}a^{n_i-2m_i}=a^{n_i-m_i}.\] Hence, \[(C_ia^{n_i-m_i})(C_ia^{n_i-m_i})=C_i^2a^{2n_i-2m_i}=C_ia^{n_i-m_i}.\] We may take $n_i-m_i$ to be minimal, but then since $S_i\subseteq S_j$ for $j<i$ we have $n_j-m_j\leq n_i-m_i$ (this gives 3). As for 4, if $1<j<i$: \[e_ie_j=C_ia^{n_i-m_i}C_ja^{n_j-m_j}=C_ia^{n_i-m_i+m_j}C_ja^{n_j-2m_j},\] but $n_i-m_i+m_j\geq n_j$, so \[e_ie_j=C_ia^{n_i-m_i+m_j-n_j}C_ja^{2n_j-2m_j}=C_ia^{n_i-m_i+m_j-n_j}a^{n_j-m_j}=e_i.\] Property 5 follows quite similarly. \end{proof} \begin{proposition}\label{T:inf-def-is-spr} Let $S$ be an $\infty$-definable semigroup inside a stable structure. Then $S$ is strongly $\pi$-regular. \end{proposition} \begin{proof} Let $a\in S$. For all $i>1$ let $S_i, C_i, n_i$ and $m_i$ be as in Lemma \ref{L:technical-stuff-for-spr}. Set $k_i=n_i-m_i$ and \[e_{i-1}=C_ia^{k_i} \text{ and } \beta_{i-1}=e_iC_ie_i;\] notice that these are both elements of $S_{i-1}$ (explaining the sub-index). By Lemma \ref{L:technical-stuff-for-spr}(4) we get a descending sequence of idempotents \[e_1\geq e_2\geq \dots,\] with respect to the partial order on the idempotents. By stability it must stabilize. Thus we may assume that $e:=e_1=e_2=\dots$, which is then an element of $S$. Moreover, for all $i>1$ \[\beta_1=\beta_1\cdot e=\beta_1a^{k_{i+1}}\cdot \beta_i=e\cdot a^{k_{i+1}-k_2}\beta_i.\] So \[\beta_1=e\cdot a^{k_{i+1}-k_2}eC_{i+1}e,\] which is a product of $\leq 20$ elements of $S_{i+1}$ and thus lies in $S_i$. Since $i$ was arbitrary, $\beta_1\in S$. In conclusion, by setting $k:=k_2$ and $\beta:=\beta_1$, \[a^ke=ea^k=a^k, \text{ } a^k\beta=\beta a^k=e \text{ and } \beta e=e\beta=\beta.\] So $a^k$ is in the unit group of $eSe$. \end{proof} \begin{corollary} There exists an $n\in\mathbb{N}$ such that for all $a\in S$, $a^n$ is an element of a subgroup of $S$. \end{corollary} \begin{proof} Compactness. \end{proof} \begin{corollary} $S$ has an idempotent. \end{corollary} \begin{remark} In the notation of \Cref{sec:S_G(M)}, Newelski showed in \cite{Newelski-stable-groups} that $S_{G,\Delta}(M)$ is an $\infty$-definable semigroup in $M^{eq}$ and that it is s$\pi$r. \Cref{T:inf-def-is-spr}, thus, gives another proof. \end{remark}
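In the finite case the content of these results can be made completely concrete. The sketch below is an added illustration (the finite, hence stable, semigroup $(\mathbb{Z}/48\mathbb{Z},\cdot)$ is a sample choice): for each $a$ the power sequence $a,a^2,a^3,\dots$ is eventually periodic, the cycle it enters is a cyclic group, and so $a^n$ lies in a subgroup as soon as $n$ reaches the index of $a$; the maximum of these indices is a universal $n$ as in the first corollary above.

\begin{verbatim}
# For each a, the powers a, a^2, ... are eventually periodic; the cycle
# they enter is a cyclic group, so a^n lies in a subgroup as soon as n
# reaches the "index" of a (the start of the cycle).
def index_period_idempotent(a, m):
    seen, powers, x = {}, [], a % m
    while x not in seen:
        seen[x] = len(powers)
        powers.append(x)
        x = (x * a) % m
    i = seen[x]                           # 0-based start of the cycle
    cycle = powers[i:]
    e = next(c for c in cycle if (c * c) % m == c)   # its unique idempotent
    return i + 1, len(cycle), e           # 1-based index, period, e_a

m = 48
print("universal n:", max(index_period_idempotent(a, m)[0] for a in range(m)))
for a in (2, 3, 6):
    n, p, e = index_period_idempotent(a, m)
    print(f"a={a}: a^{n} lies in a group of order {p} with identity {e}")
\end{verbatim}

For $m=48$ it reports the universal exponent $4$, attained for instance at $a=2$, where $2^4=16$ lies in the two-element group $\{16,32\}$ with identity $16$.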
\subsection{A Counter Example} It is known that every $\infty$-definable group inside a stable structure is an intersection of definable ones. It would be even better if every such semigroup were an intersection of definable semigroups. Milliet showed that every $\infty$-definable semigroup inside a small structure is an intersection of definable semigroups \cite{on_enveloping}. In particular this is true for $\omega$-stable structures. So, for instance, any $\infty$-definable subsemigroup of $M_n(k)$ for $k\models ACF$ is an intersection of definable semigroups. Unfortunately, this is not true already in the superstable case, as the following example will show. \begin{example}\label{E:counter-exam} Pillay and Poizat give an example of an $\infty$-definable equivalence relation which is not an intersection of definable ones \cite{Pillay-Poizat}. This will give us our desired semigroup structure. Consider the theory of the model with universe $\mathbb{Q}$ (the rationals) and the unary predicates \[U_a=\{x\in\mathbb{Q}: x\leq a\}\] for $a\in\mathbb{Q}$. The equivalence relation, $E$, is defined by \[\bigwedge_{a<b} ((U_a(x)\to U_b(y))\wedge (U_a(y)\to U_b(x))).\] It is an equivalence relation and in particular a preorder (reflexive and transitive). Notice that it also follows that $E$ can't be an intersection of definable preorders. For if $E=\bigwedge R_i$ (for definable preorders $R_i$) then, since $E$ is symmetric, we also have \[E=\bigwedge (R_i\wedge\overline{R}_i),\] where $x\overline{R}_iy \Leftrightarrow yR_ix$. But each $R_i\wedge\overline{R}_i$ is a definable equivalence relation (the symmetrization of $R_i$), so $E$ would be an intersection of definable equivalence relations, which is impossible. Milliet showed that, in an arbitrary structure, every $\infty$-definable semigroup is an intersection of definable semigroups if and only if every $\infty$-definable preorder is an intersection of definable preorders \cite{on_enveloping}. As a consequence, in the above structure we can define an $\infty$-definable semigroup which serves as a counter-example. Specifically, it is the following semigroup: if the preorder $R$ is on a set $X$, add a new element $0$ and add the pair $0R0$ to the preorder, and define a semigroup multiplication on (the set of pairs of) $R$: \begin{equation*} (a,b)\cdot (c,d) = \begin{cases} (a,d) & \text{if }b=c,\\ (0,0) & \text{else } \end{cases} \end{equation*} \begin{remark} This example also shows that even ``presumably well behaved'' $\infty$-definable semigroups need not be an intersection of definable ones. In the example at hand the maximal subgroups are uniformly definable (each of them is finite) and the idempotents form a commutative semigroup. \end{remark} \end{example} \subsection{Semigroups with Negative Partial Order}\label{Subsec:inf-def-with-neg-part} We showed in \Cref{T:inf-def-is-spr} that every $\infty$-definable semigroup in a stable structure is strongly $\pi$-regular, hence the natural partial order on it has the following form: For any $a,b\in S$, $a\leq b$ if there exist $f,e\in E(S^1)$ such that $a=be=fb$. \begin{remark} Notice that this order generalizes the order on the idempotents. \end{remark} In a similar manner to what was done with the order of the idempotents, we have the following: \begin{proposition} $ $ \begin{enumerate}[(i)] \item Every chain of elements with respect to the natural partial order is finite. \item By compactness, the length of such chains is uniformly bounded. \end{enumerate} \end{proposition} \begin{definition} We'll say that a semigroup $S$ is \emph{negatively ordered} with respect to the partial order if \[a\cdot b\leq a,b\] for all $a,b\in S$.
\end{definition} \begin{example} A commutative idempotent semigroup (an inf-semilattice) is negatively ordered. \end{example} Negatively ordered semigroups were studied by Maia and Mitsch \cite{NegPart}. We'll only need the definition. \begin{proposition}\label{P:in negative_order_n+1_is_n} Let $S$ be a negatively ordered semigroup. Assume that the length of chains is bounded by $n$; then any product of $n+1$ elements is a product of $n$ of them. \end{proposition} \begin{proof} Let $a_1\cdot\dotsc\cdot a_{n+1}\in S$. Since $S$ is negatively ordered, \[a_1\cdot \dotsc \cdot a_{n+1}\leq a_1\cdot\dotsc\cdot a_n\leq\dots\leq a_1\cdot a_2\leq a_1.\] Since $n$ bounds the length of chains, we must have \[a_1\cdot\dotsc\cdot a_i=a_1\cdot\dotsc\cdot a_{i+1}\] for some $1\leq i\leq n$. But then \[a_1\cdot\dotsc\cdot a_{n+1}=(a_1\cdot\dotsc\cdot a_i)\cdot a_{i+2}\cdot\dotsc\cdot a_{n+1},\] which is a product of $n$ of the original elements (omitting $a_{i+1}$). \end{proof} This property is enough for an $\infty$-definable semigroup to be contained inside a definable one. \begin{proposition}\label{P:product_of_n+1_is_n_is_inside_def} Let $S$ be an $\infty$-definable semigroup (in any structure). If every product of $n+1$ elements in $S$ is a product of $n$ of them, then $S$ is contained inside a definable semigroup. Moreover, $S$ is an intersection of definable semigroups. \end{proposition} \begin{proof} Let $S\subseteq S_0$ be a definable set where the multiplication is defined. By compactness, there exists a definable subset $S\subseteq S_1\subseteq S_0$ such that \begin{itemize} \item Any product of $\leq 3n$ elements of $S_1$ is an element of $S_0$; \item Associativity holds for products of $\leq 3n$ elements of $S_1$; \item Any product of $n+1$ elements of $S_1$ is already a product of $n$ of them. \end{itemize} Let \[S_1\subseteq S_2=\{x\in S_0: \exists y_1,\dots,y_n\in S_1\quad \bigvee_{i=1}^n x=y_1\cdot \dotsc\cdot y_i\}.\] We claim that if $a\in S_1$ and $b\in S_2$ then $ab\in S_2$: indeed, $ab$ is a product of at most $n+1$ elements of $S_1$, and by the properties of $S_1$ such a product is again a product of at most $n$ of them, hence lies in $S_2$. Define \[S_3=\{x\in S_2:xS_2\subseteq S_2\}.\] $S_3$ is our desired definable semigroup: it contains $S$\,(by the claim) and is closed under multiplication. \end{proof} As a consequence of these two propositions, we have \begin{proposition} Every $\infty$-definable negatively ordered semigroup inside a stable structure is contained inside a definable semigroup. Furthermore, it is an intersection of definable semigroups. \end{proposition} Since every commutative idempotent semigroup is negatively ordered, we have the following corollary. \begin{corollary}\label{P:idemp_is_inside_definable} Let $E$ be an $\infty$-definable commutative idempotent semigroup inside a stable structure; then $E$ is contained in a definable commutative idempotent semigroup. Furthermore, it is an intersection of definable ones. \end{corollary} \begin{proof} We only need to show that the definable semigroup containing $E$ can be made commutative idempotent. For that, we need to demand in addition that all the elements of $S_1$ (in the proof of \Cref{P:product_of_n+1_is_n_is_inside_def}) be idempotents and that they commute, and this can be arranged by compactness. \end{proof} \subsection{Clifford Monoids}\label{Subsec:Cliff} We assume that $S$ is an $\infty$-definable Clifford semigroup (see \Cref{subsec:Clifford}) inside a stable structure. The simplest case of Clifford semigroups, commutative idempotent semigroups (semilattices), was considered in \Cref{Subsec:inf-def-with-neg-part}. Understanding the maximal subgroups of a semigroup is one of the first steps when one wishes to understand the semigroup itself. Lemma \ref{L:max_subgrps_are_inf_definable} is useful and will be used implicitly.
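\begin{example} To fix ideas, here is a minimal illustrative example (ours, not needed later): for any group $G$, the monoid $G^0=G\cup\{0\}$ obtained by adjoining a zero element is a Clifford monoid. Its idempotents are $1$ and $0$, its maximal subgroups are $G_1=G$ and $G_0=\{0\}$, and the structure homomorphism $G_1\to G_0$ is multiplication by $0$. Note that every element of $G^0$ has the form $ge$ with $g\in G$ and $e\in\{1,0\}$. \end{example}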
Recall that every Clifford semigroup is a strong semilattice of groups. Between any two maximal subgroups $G_e$ and $G_{ef}$ there is a homomorphism $\phi_{e,ef}$, given by multiplication by $f$. \begin{definition} By a \emph{surjective Clifford monoid} we mean a Clifford monoid $M$ such that for every $a\in M$ there exist $g\in G(M)$ and $e\in E(M)$ such that $a=ge$. Surjectivity refers to the fact that these types of Clifford monoids are exactly the ones for which all the homomorphisms $\phi_{e,ef}$ are surjective. \end{definition} We restrict ourselves to $\infty$-definable surjective Clifford monoids. \begin{theorem}\label{T:Cliff_with_surj} Let $M$ be an $\infty$-definable surjective Clifford monoid in a stable structure. Then $M$ is contained in a definable monoid, extending the multiplication on $M$. This monoid is also a surjective Clifford monoid. Furthermore, every such monoid is an intersection of definable surjective Clifford monoids. \end{theorem} \begin{proof} Let $M\subseteq M_0$ be a definable set where the multiplication is defined. By compactness, there exists a definable subset $M\subseteq M_1\subseteq M_0$ such that \begin{itemize} \item Associativity holds for $\leq 6$ elements of $M_1$; \item Any product of $\leq 6$ elements of $M_1$ is in $M_0$; \item $1$ is a neutral element of $M_1$; \item If $x$ and $y$ are elements of $M_1$ with $y$ an idempotent then $xy=yx$. \end{itemize} By the standard argument for stable groups, there exists a definable group \[G_1\subseteq G\subseteq M_1,\] where $G_1\subseteq M$ is the maximal subgroup of $M$ associated with the idempotent $1$. By \Cref{P:idemp_is_inside_definable}, there exists a definable commutative idempotent semigroup $E(M)\subseteq E\subseteq M_1$. Notice that for every $g\in G$ and $e\in E$, \[ge=eg.\] Define \[M_2=\{m\in M_0:\exists g\in G,e\in E\quad m=ge\}.\] $M_2$ is the desired monoid. The ``furthermore'' part follows from the above proof by a standard argument. \end{proof} \begin{remark}\label{C:Cliff_is_inter} As before, it follows from the proof that any such monoid is an intersection of definable surjective Clifford monoids. \end{remark} We don't have an argument for Clifford monoids which are not necessarily surjective. But we do have a proof for a certain kind of inverse monoid. We'll need this result in \Cref{sec:S_G(M)}. \begin{theorem}\label{T:Needed_for_1_based} Let $M$ be an $\infty$-definable monoid in a stable structure, such that \begin{enumerate} \item Its unit group $G$ is definable, \item $E(M)$ is commutative, and \item for every $a\in M$ there exist $g\in G$ and $e\in E(M)$ such that \[a=ge.\] \end{enumerate} Then $M$ is contained in a definable monoid, extending the multiplication on $M$. This monoid also has these properties. \end{theorem} \begin{remark} Incidentally, $M$ is an inverse monoid (recall the definition from Section \ref{subsec:Clifford}). It is obviously regular, and the pseudo-inverse is unique since the idempotents commute (see the preliminaries). Also, as before, every such monoid is an intersection of definable ones. \end{remark} \begin{proof} Let $M\subseteq M_0$ be a definable set where the multiplication is defined and associative. By compactness, there exists a definable subset $M\subseteq M_1\subseteq M_0$ such that \begin{itemize} \item Associativity holds for $\leq 6$ elements of $M_1$; \item Any product of $\leq 6$ elements of $M_1$ is in $M_0$; \item $1$ is a neutral element of $M_1$; \item If $x$ and $y$ are idempotents of $M_1$ then $xy=yx$.
\end{itemize} By Proposition \ref{P:idemp_is_inside_definable}, there exists a definable commutative idempotent semigroup $E(M)\subseteq E\subseteq M_1$. Let \[E_1=\{e\in E: \forall g\in G\: g^{-1}eg\in E\}.\] $E_1$ is still a definable commutative idempotent semigroup that contains $E(M)$. Moreover, for every $e\in E_1$ and $g\in G$, \[g^{-1}eg\in E_1.\] Define \[M_2=\{m\in M_0:\exists g\in G,e\in E_1\quad m=ge\}.\] $M_2$ is the desired monoid. Indeed, if $g,h\in G$ and $e,f\in E_1$ then, taking $h^\prime=h\in G$ and $e^\prime=h^{-1}eh\in E_1$, we have \[eh=h^\prime e^\prime,\] thus \[ge\cdot hf=gh^\prime\cdot e^\prime f\in M_2.\] \end{proof}
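\begin{remark} As a sanity check (the computation is ours): in the setting of \Cref{T:Needed_for_1_based}, the quasi-inverse of $a=ge$\,($g\in G$, $e\in E(M)$) can be written explicitly as $a^*=eg^{-1}$. Indeed, \[aa^*a=ge\cdot eg^{-1}\cdot ge=g\,eee=ge=a,\qquad a^*aa^*=eg^{-1}\cdot ge\cdot eg^{-1}=eee\,g^{-1}=a^*,\] using only associativity and $e^2=e$; commutativity of the idempotents is what makes this quasi-inverse unique. \end{remark}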
\section{The space of types $S_G(M)$ on a definable group}\label{sec:S_G(M)} Let $G$ be a definable group inside a stable structure $M$. Assume that $G$ is definable by a formula $G(x)$. Define $S_G(M)$ to be the set of all types in $S(M)$ which concentrate on $G$. \begin{definition}\label{D:of product} Let $p,q\in S_G(M)$, define \[p\cdot q=tp(a\cdot b/M),\] where $a\models p, b\models q$ and $a\forkindep[M]b$. \end{definition} Notice that the above definition may also be stated in the following form: \[U\in p\cdot q\Leftrightarrow d_q(U)\in p,\] where $U$ is a formula and $d_q(U):=\{g\in G(M) :g^{-1}U\in q\}$ \cite{Newelski-stable-groups}. Thus, if $\Delta$ is a finite family of formulae, in order to restrict the multiplication to $S_{G,\Delta}(M)$, the set of $\Delta$-types on $G$, we'll need to consider invariant families of formulae: \begin{definition} Let $\Delta\subseteq L$ be a finite set of formulae. We'll say that $\Delta$ is ($G$-)invariant if the family of subsets of $G$ definable by instances of formulae from $\Delta$ is invariant under left and right translation in $G$. \end{definition} From now on, unless stated otherwise, we'll assume that $\Delta$ is a finite set of invariant formulae. For $\Delta_1\subseteq \Delta_2$ let \[r^{\Delta_2}_{\Delta_1}:S_{G,\Delta_2}(M)\to S_{G,\Delta_1}(M)\] be the restriction map. These are semigroup homomorphisms. Thus \[S_G(M)=\varprojlim_{\Delta}S_{G,\Delta}(M).\] In \cite{Newelski-stable-groups} Newelski shows that $S_{G,\Delta}(M)$ may be interpreted in $M^{eq}$ as an $\infty$-definable semigroup. Our aim is to show that these $\infty$-definable semigroups are in fact an intersection of definable ones and, as a consequence, that $S_G(M)$ is an inverse limit of definable semigroups of $M^{eq}$. \subsection{$S_{G,\Delta}$ is an intersection of definable semigroups} Let $\varphi(x,y)$ be a $G$-invariant formula. The proof that $S_{G,\varphi}(M)$ is interpretable as an $\infty$-definable semigroup in $M^{eq}$ is given by Newelski in \cite{Newelski-stable-groups}. We'll show that it may be given as an intersection of definable semigroups. \begin{proposition}\cite{Pillay}\label{P:def-in-stab} There exist $n\in\mathbb{N}$ and a formula $d_\varphi(y,u)$ such that for every $p\in S_{G,\varphi}(M)$ there exists a tuple $c_p\subseteq G$ such that \[d_\varphi(y,c_p)=(d_px)\varphi (x,y).\] Moreover, $d_\varphi$ may be chosen to be a positive boolean combination of $\varphi$-formulae. \end{proposition} Let $E_{d_\varphi}$ be the equivalence relation defined by \[c_1E_{d_\varphi} c_2 \Longleftrightarrow \forall y(d_\varphi (y,c_1)\leftrightarrow d_\varphi (y,c_2)).\] Set $Z_{d_\varphi}:=\nicefrac{M}{E_{d_\varphi}}$; it is the sort of canonical parameters for potential $\varphi$-definitions. \begin{remark} We may assume that $c_p$ is the canonical parameter for $d_\varphi (M,c_p)$, namely, that it lies in $Z_{d_\varphi}$. Just replace the formula $d_\varphi(y,u)$ with the formula \[\psi(y,v)=\forall u \left((\pi(u)=v)\to d_\varphi(y,u)\right),\] where $v$ lies in the sort $\nicefrac{M}{E_{d_\varphi}}$ and $\pi:M\to\nicefrac{M}{E_{d_\varphi}}$ is the quotient map.
\end{remark} Each element $c\in Z_{d_\varphi}$ corresponds to a complete (but not necessarily consistent) set of $\varphi$-formulae: \[p^0_c:=\{\varphi(x,a): a\in M \text{ and } \models d_\varphi(a,c)\}\cup\] \[\{\neg\varphi(x,a): a\in M \text{ and } \not\models d_\varphi(a,c)\}.\] \begin{remark} Notice that $p^0_c$ may not be closed under equivalence of formulae, but the set of canonical parameters $c\in Z_{d_\varphi}$ such that $p^0_c$ is closed under equivalence of formulae is the definable set: \[\{c\in Z_{d_\varphi}: \forall t_1 \forall t_2 \left(\varphi(x,t_1)\equiv \varphi(x,t_2)\to (d_\varphi(t_1,c)\leftrightarrow d_\varphi(t_2,c))\right)\}.\] Thus we may assume that we only deal with sets $p^0_c$ which are closed under equivalence of formulae. \end{remark} The set of $c\in Z_{d_\varphi}$ such that $p^0_c$ is $k$-consistent is definable: \[Z_{d_\varphi}^k=\{c\in Z_{d_\varphi} : p^0_c \text{ is $k$-consistent}\}.\] Define \[Z=\bigcap_{k< \omega} Z_{d_\varphi}^k.\] There is a bijection ($p\mapsto c_p$) between $S_{G,\varphi}(M)$ and $Z$. The following is a trivial consequence of \Cref{P:def-in-stab}: \begin{lemma} There exists a formula $\Phi(u,v,y)$ with $u,v$ in the sort $Z_{d_\varphi}$, such that \[\Phi(c_p,c_q,a) \Leftrightarrow \varphi(x,a)\in p\cdot q.\] Moreover, $\Phi$ is a positive boolean combination of $d_\varphi$-formulae (and so of $\varphi$-formulae as well). \end{lemma} \begin{proof} Since $\varphi$ is $G$-invariant, for simplicity we'll assume that $\varphi(x,y)$ is in fact of the form $\varphi(l\cdot x\cdot r,y)$. Let $c_p,c_q\subseteq G$ be tuples whose images in $Z_{d_\varphi}$ correspond to the $\varphi$-types $p,q\in S_{G,\varphi}(M)$, respectively. Remembering that $u=(u_{ij})_{1\leq i,j\leq n}$ is a tuple of variables, we may write \[d_\varphi(l,r,y,u)=\bigvee_{i\leq n}\bigwedge_{j\leq n}\varphi(l\cdot u_{ij}\cdot r,y).\] Since \[d_q(\varphi(b\cdot x\cdot c,a))=\{g\in G(M):\varphi((b\cdot g)\cdot x\cdot c,a)\in q\}\] \[=\{g\in G(M) :\models d_\varphi(b\cdot g,c,a,c_q)\}\] and \[d_\varphi(b\cdot g,c,a,c_q)=\bigvee_{i\leq n}\bigwedge_{j\leq n}\varphi(b\cdot g\cdot ((c_q)_{ij}\cdot c),a),\] we get that \[\varphi(b\cdot x\cdot c,a)\in p\cdot q \Longleftrightarrow \models \bigvee_{i\leq n}\bigwedge_{j\leq n}d_\varphi(b,((c_q)_{ij}\cdot c),a,c_p).\] \end{proof} Using this we define a partial binary operation on $Z_{d_\varphi}$: \begin{definition} For $c_1,c_2,d\in Z_{d_\varphi}$, we'll say that $c_1\cdot c_2=d$ if $d$ is the unique element of $Z_{d_\varphi}$ that satisfies \[ \models d_\varphi(a,d)\Longleftrightarrow\ \models \Phi(c_1,c_2,a)\] for all $a\in M$. \end{definition} By compactness, there exists $k\in\mathbb{N}$ such that for all $c_1,c_2\in Z^k_{d_\varphi}$ there exists a unique $d\in Z_{d_\varphi}$ such that $c_1\cdot c_2=d$. For simplicity, we'll assume that this happens for $Z^1_{d_\varphi}$. \begin{theorem} $Z$ is contained in a definable semigroup extending the multiplication on $Z$. \end{theorem} \begin{proof} By compactness there exists $k\in\mathbb{N}$ such that the multiplication is associative on $Z^k_{d_\varphi}$ and the product of two elements of $Z^k_{d_\varphi}$ is in $Z^1_{d_\varphi}$. For simplicity, let's assume that this happens for $Z^2_{d_\varphi}$. \begin{claim} If $c_p\in Z$ and $c\in Z^2_{d_\varphi}$ then $c_p\cdot c\in Z^2_{d_\varphi}.$ \end{claim} Let $U_1,U_2\in p^0_{c_p\cdot c}$, hence \[\{g\in G(M):g^{-1}U_1\in p^0_c\},\{g\in G(M):g^{-1}U_2\in p^0_c\}\in p.\] Since $p$ is consistent, there exists $g\in G(M)$ such that $g^{-1}U_1,g^{-1}U_2\in p^0_c$.
Since $c\in Z^2_{d_\varphi}$, $p^0_c$ is $2$-consistent, so $g^{-1}U_1\wedge g^{-1}U_2$ is consistent, and hence so is $U_1\wedge U_2$. Thus, the claim follows. Define \[\widehat{Z^2_{d_\varphi}}=\{c\in Z^2_{d_\varphi}: c\cdot Z^2_{d_\varphi}\subseteq Z^2_{d_\varphi}\}.\] $\widehat{Z^2_{d_\varphi}}$ is the desired definable semigroup. \end{proof} \begin{corollary} $Z=S_{G,\varphi}(M)$ is an intersection of definable semigroups. \end{corollary} Looking more closely at the above proof, we may show that $S_G(M)$ is an inverse limit of definable semigroups: Assume that $\Delta_2=\{\varphi_1,\varphi_2\}$ and $\Delta_1=\{\varphi_1\}$. In the above notation: \[Z_{\Delta_2}=Z_{d_{\varphi_1}}\times Z_{d_{\varphi_2}}.\] For $c=\langle c_1,c_2\rangle \in Z_{\Delta_2}$ define \[p^0_c=p^0_{c_1} \cup p^0_{c_2}\] and then \[Z(\Delta_2)=\bigcap_{k<\omega} Z^k_{\Delta_2}\] similarly. For $c,c',d\in Z(\Delta_2)$ we'll say that $c\cdot c'=d$ if $d$ is the unique element of $Z(\Delta_2)$ that satisfies \[c_1\cdot c'_1=d_1 \text{ and } c_2\cdot c'_2=d_2.\] As before, we assume that such a unique element exists already for any pair of elements in $Z^1_{\Delta_2}=Z^1_{\varphi_1}\times Z^1_{\varphi_2}$. The restriction maps $r^{\Delta_2}_{\Delta_1}:Z^1_{\Delta_2}\to Z^1_{\Delta_1}$ are definable homomorphisms. Generally, for every $\Delta=\{\varphi_1,\ldots, \varphi_n\}$ and $i<\omega$, \[Z_\Delta^i=Z_{\varphi_1}^i\times\cdots \times Z_{\varphi_n}^i,\] and the multiplication is coordinate-wise. So the restriction maps commute with the inclusions. As a result, \begin{theorem}\label{T:invlim} $S_G(M)$ is an inverse limit of definable semigroups: \[\varprojlim_{\Delta,i} Z_\Delta^i=\varprojlim_\Delta S_{G,\Delta}(M).\] \end{theorem}
\subsection{The case where $S_G(M)$ is an inverse monoid}\label{ss:S_G inverse} We would like to use the theorems we proved in \Cref{S:big-wedge-defin} to improve the result in the situation where $S_G(M)$ is an inverse monoid. We'll first see that this situation might occur. Notice that the inverse operation $^{-1}$ on $S_G(M)$ is an involution. \begin{proposition}\cite{Lawson}\label{P:semi-topo semigroup} Let $S$ be a compact semitopological $*$-semigroup (a semigroup with involution) with a dense unit group $G$. Then the following are equivalent for any element $p\in S$: \begin{enumerate} \item $p=pp^*p$; \item $p$ has a unique quasi-inverse; \item $p$ has a quasi-inverse. \end{enumerate} \end{proposition} \begin{remark} In the situation of $G(M)\hookrightarrow S_G(M)$, the above proposition can be proved directly using model theory and stabilizers. \end{remark} Translating the above result to our situation and using results on $1$-based groups (see \cite{Pillay}): \begin{corollary} The following are equivalent: \begin{enumerate} \item for every $p\in S_G(M)$, $p$ is the generic of a right coset of a connected $M$-$\infty$-definable subgroup of $G$; \item for every $p\in S_G(M)$, $p\cdot p^{-1}\cdot p=p$; \item $S_G(M)$ is an inverse monoid; \item $S_G(M)$ is a regular monoid. \end{enumerate} Thus if $G$ is $1$-based then $S_G(M)$ is an inverse monoid. \end{corollary} \begin{proof} $(2),(3)$ and $(4)$ are equivalent by Proposition \ref{P:semi-topo semigroup}, and $(1)$ is equivalent to $(2)$ by \cite[Lemma 1.2]{Kowalski}. \end{proof} With a little more work one may characterise when $S_G(M)$ is a Clifford monoid. \begin{definition} A right-and-left coset of a subgroup $H$ is a right coset $Ha$ such that $aH=Ha$. \end{definition} \begin{proposition}\cite{Kowalski} $p\in S_G(M)$ is a generic of a right-and-left coset of an $M$-$\infty$-definable connected subgroup of $G$ if and only if $p\cdot p\cdot p^{-1}=p$. \end{proposition} By using the following easy lemma \begin{lemma} Assuming $pp^{-1}p=p$, $$pp^{-1}=p^{-1}p \Leftrightarrow ppp^{-1}=p.$$ \end{lemma} we get \begin{proposition} The following are equivalent: \begin{enumerate} \item every $p\in S_G(M)$ is the generic of a right-and-left coset of a connected $M$-$\infty$-definable subgroup of $G$; \item $S_G(M)$ is a Clifford monoid. \end{enumerate} \end{proposition} As a result, it may happen that $S_G(M)$ is an inverse (or Clifford) monoid. One may wonder if in these cases we may strengthen the result. \begin{lemma} If $S_G(M)$ is a Clifford monoid then so is $S_{G,\Delta}(M)$. The same goes for inverse monoids. \end{lemma} \begin{proof} In order to show that $S_{G,\Delta}(M)$ is a Clifford monoid we must show that it is regular and that the idempotents are central. Indeed, this follows from the fact that the restriction maps are surjective homomorphisms and that if $q|_\Delta$ is an idempotent there exists an idempotent $p\in S_G(M)$ such that $p|_\Delta=q|_\Delta$ \cite{Newelski-stable-groups}. \end{proof} Assume that $\Delta$ is a finite invariant set of formulae. We'll show that if $S_G(M)$ is an inverse monoid then $S_{G,\Delta}(M)$ is an intersection of definable inverse monoids. We recall some definitions from \cite{Newelski-stable-groups}. Since $\Delta$ is invariant, for $p\in S_{G,\Delta}(M)$ we have a map \[d_p:Def_{G,\Delta}(M)\to Def_{G,\Delta}(M)\] defined by \[U\mapsto \{g\in G(M): g^{-1}U\in p\}.\] Here $Def_{G,\Delta}(M)$ is the family of $\Delta$-$M$-definable subsets of $G(M)$.
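\begin{example} As a sanity check (our computation, not from \cite{Newelski-stable-groups}): let $p=tp(b/M)$ be a realized type, with $b\in G(M)$. For $U\in Def_{G,\Delta}(M)$ we have $g^{-1}U\in p\Leftrightarrow b\in g^{-1}U\Leftrightarrow gb\in U$, so \[d_p(U)=Ub^{-1}.\] Consequently, if $q=tp(a/M)$ is also realized, then $U\in q\cdot p\Leftrightarrow d_p(U)\in q\Leftrightarrow a\in Ub^{-1}\Leftrightarrow ab\in U$; that is, $q\cdot p=tp(ab/M)$, so the multiplication of types extends the multiplication of $G(M)$. \end{example}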
Furthermore, for $p\in S_{G,\Delta}(M)$ define \[Ker(d_p)=\{U\in Def_{G,\Delta}(M) : d_p(U)=\emptyset\}.\] \begin{lemma}\cite{Newelski-stable-groups} Let $\Delta$ be a finite invariant set of formulae and $p\in S_{G,\Delta}(M)$ be an idempotent. Then \[\{q\in S_{G,\Delta}(M):Ker(d_q)=Ker(d_p)\}=\{g\cdot p: g\in G(M)\}=G(M)p.\] In particular it is definable (in $M^{eq}$). \end{lemma} \begin{corollary} If $S_{G,\Delta}(M)$ is a regular semigroup then \[S_{G,\Delta}(M)=\bigcup_{p \text{ idempotent}} G(M)p.\] \end{corollary} \begin{proof} Let $q\in S_{G,\Delta}(M)$. By regularity there exists $\tilde{q}$ such that \[q=q\tilde{q}q \text { and } \tilde{q}=\tilde{q}q\tilde{q}.\] $\tilde{q}q$ is the desired idempotent, and $Ker(d_q)=Ker(d_{\tilde{q}q}).$ \end{proof} Recall \Cref{T:Needed_for_1_based}. Notice that a monoid fulfilling the requirements of that theorem is an inverse monoid. It is obviously regular and the pseudo-inverse is unique since the idempotents commute. We get the following: \begin{corollary} If $S_G(M)$ is an inverse semigroup then $S_{G,\Delta}(M)$ is an intersection of definable inverse semigroups. \end{corollary} \begin{proof} Since $S_G(M)$ is inverse so are the $S_{G,\Delta}(M)$. By \cite{Newelski-stable-groups} the unit group of $S_{G,\Delta}(M)$ is definable, and by the previous corollary for every $p\in S_{G,\Delta}(M)$ there exists an idempotent $e$ and $g\in G(M)$ such that \[p=ge.\] Now apply \Cref{T:Needed_for_1_based}. \end{proof} \paragraph*{Acknowledgement} I would like to thank my PhD advisor, Ehud Hrushovski, for our discussions, his ideas and support, and his careful reading of previous drafts. \end{document}
\begin{document} \begin{abstract} Dolgachev surfaces are simply connected minimal elliptic surfaces with $p_g=q=0$ and of Kodaira dimension 1. These surfaces are constructed by logarithmic transformations of rational elliptic surfaces. In this paper, we explain the construction of Dolgachev surfaces via $\Q$-Gorenstein smoothing of singular rational surfaces with two cyclic quotient singularities. This construction is based on the paper\,\cite{LeePark:SimplyConnected}. Also, some exceptional bundles on Dolgachev surfaces associated with the $\Q$-Gorenstein smoothing are constructed, following the idea of Hacking\,\cite{Hacking:ExceptionalVectorBundle}. In the case of Dolgachev surfaces of type $(2,3)$, we describe the Picard group and present an exceptional collection of maximal length. Finally, we prove that the presented exceptional collection is not full, hence there exists a nontrivial phantom category in the derived category.\par \end{abstract} \maketitle \setcounter{tocdepth}{1}\tableofcontents \section {Introduction} In the last few decades, the derived category $\D^{\rm b}(S)$ of a nonsingular projective variety $S$ has been extensively studied by algebraic geometers. One of the attempts is to find an exceptional collection, that is, a sequence of objects $E_1,\ldots,E_n$ such that \[ \Ext^k(E_i,E_j) = \left\{ \begin{array}{cl} 0 & \text{if}\ i > j\\ 0 & \text{if}\ i=j\ \text{and}\ k\neq 0 \\ \C & \text{if}\ i=j\ \text{and}\ k=0. \end{array} \right. \] There were many approaches to find exceptional collections of maximal length when $S$ is a nonsingular projective surface with $p_g = q=0$. Gorodentsev and Rudakov\,\cite{GorodenstevRudakov:ExceptionalBundleOnPlane} have classified all possible exceptional collections in the case $S = \P^2$, and exceptional collections on del Pezzo surfaces have been studied by Kuleshov and Orlov\,\cite{KuleshovOrlov:ExceptionalSheavesonDelPezzo}. For Enriques surfaces, Zube \cite{Zube:ExceptionalOnEnriques} gives an exceptional collection of length $10$, and the orthogonal part is studied by Ingalls and Kuznetsov\,\cite{IngallsKuznetsov:EnriquesQuarticDblSolid} for nodal Enriques surfaces. Following the initiating work of B\"ohning, Graf von Bothmer, and Sosna\,\cite{BGvBS:ExeceptCollec_Godeaux}, numerous results on surfaces of general type have appeared\,({\it e.g.} \cite{GalkinShinder:Beauville,BGvBKS:DeterminantalBarlowAndPhantom,AlexeevOrlov:DerivedOfBurniat,Coughlan:ExceptionalCollectionOfGeneralType,KSLee:Isogenus_1,GalkinKatzarkovMellitShinder:KeumFakeProjective,Keum:FakeProjectivePlanes}). For surfaces of Kodaira dimension one, such exceptional collections had not been shown to exist, so it is natural to seek an exceptional collection in $\D^{\rm b}(S)$. In this paper, we use the technique of $\Q$-Gorenstein smoothing to study the case $\kappa(S) = 1$. As far as the authors know, this is the first construction of an exceptional collection of maximal length on a surface of Kodaira dimension one. The key ingredient is the method of Hacking\,\cite{Hacking:ExceptionalVectorBundle}, which associates a $T_1$-singularity $(P \in X)$ with an exceptional vector bundle on the general fiber of a $\Q$-Gorenstein smoothing of $X$.
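To put ``maximal length'' in perspective (our gloss; the bound itself is standard): the classes of the objects of an exceptional collection are linearly independent in $K_0(S)$, so for a surface with $p_g=q=0$ the length is at most $\op{rk} K_0(S)=2+\op{rk}\op{NS}(S)$. For $\P^2$ this bound is $3$, attained by Beilinson's collection $\mathcal O_{\P^2},\,\mathcal O_{\P^2}(1),\,\mathcal O_{\P^2}(2)$; for a Dolgachev surface it is $12$, which is exactly the number of line bundles appearing in Theorem~\ref{thm:Synop_ExceptCollection_MaxLength} below.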
A $T_1$-singularity is the cyclic quotient singularity \[ (0 \in \A^2 \big/ \langle \xi \rangle),\quad \xi \cdot(x,y) = (\xi x, \xi^{na-1}y), \] where $n > a > 0$ are coprime integers and $\xi$ is a primitive $n^2$-th root of unity\,(see the works of Koll\'ar and Shepherd-Barron\,\cite{KSB:CompactModuliOfSurfaces}, Manetti\,\cite{Manetti:NormalDegenerationOfPlane}, and Wahl\,\cite{Wahl:EllipticDeform,Wahl:SmoothingsOfNormalSurfaceSings} for the classification of $T_1$-singularities and their smoothings). In the paper\,\cite{LeePark:SimplyConnected}, Lee and Park constructed new surfaces of general type via $\Q$-Gorenstein smoothings of projective normal surfaces with $T_1$-singularities. Motivated by \cite{LeePark:SimplyConnected}, a substantial amount of work has been carried out, especially on (1) construction of new surfaces of general type\,({\it e.g.}\,\cite{KeumLeePark:GeneralTypeFromElliptic,LeeNakayama:SimplyGenType_PositiveChar,ParkParkShin:SimplyConnectedGenType_K3,ParkParkShin:SimplyConnectedGenType_K4}); (2) investigation of the KSBA boundaries of the moduli space of surfaces of general type\,({\it e.g.} \cite{HackingTevelevUrzua:FlipSurfaces,Urzua:IdentifyingNeighbors}). Our approach is based on a rather different perspective: \begin{center} Construct $S$ via a smoothing of a singular surface as in \cite{LeePark:SimplyConnected}, and apply \cite{Hacking:ExceptionalVectorBundle} to investigate $\Pic S$. \end{center} We study the case $S={}$a Dolgachev surface with two multiple fibers of multiplicities $2$ and $3$, and give an explicit $\Z$-basis for the N\'eron-Severi lattice of $S$\,(Theorem~\ref{thm:Synop_NSLattice}). Afterwards, we find an exceptional collection of line bundles of maximal length in $\D^{\rm b}(S)$\,(Theorem~\ref{thm:Synop_ExceptCollection_MaxLength}). \subsection*{Notations and Conventions} Throughout this paper, everything will be defined over the field of complex numbers. A surface is an irreducible projective variety of dimension two. If $T$ is a scheme of finite type over $\C$ and $t \in T$ a closed point, then we use $(t \in T)$ to indicate the analytic germ. Let $n > a > 0$ be coprime integers, and let $\xi$ be a primitive $n^2$-th root of unity. The $T_1$-singularity \[ ( 0 \in \A^2 \big/ \langle \xi \rangle ),\quad \xi\cdot(x,y) = (\xi x , \xi^{na-1}y) \] will be denoted by $\bigl( 0 \in \A^2 \big/ \frac{1}{n^2}(1,na-1) \bigr)$. If two divisors $D_1$ and $D_2$ are linearly equivalent, we write $D_1 = D_2$ if there is no ambiguity. Two $\Q$-Cartier Weil divisors $D_1,D_2$ are $\Q$-linearly equivalent, denoted by $D_1 \equiv D_2$, if there exists $r \in \Z_{>0}$ such that $rD_1 = rD_2$. Let $S$ be a nonsingular projective variety. The following invariants are associated with $S$. \begin{itemize}[fullwidth,itemindent=10pt] \item The geometric genus $p_g(S) = h^2(\mathcal O_S)$. \item The irregularity $q(S) = h^1(\mathcal O_S)$. \item The holomorphic Euler characteristic $\chi(S) = \chi(\mathcal O_S)$. \item The N\'eron-Severi group $\op{NS}(S) = \Pic S / \Pic^0 S$, where $\Pic^0 S$ is the group of divisors algebraically equivalent to zero. \end{itemize} Since the definitions of Dolgachev surfaces vary in the literature, we fix our definition. \begin{definition} Let $q > p > 0$ be coprime integers. A \emph{Dolgachev surface $S$ of type $(p,q)$} is a minimal, simply connected, nonsingular, projective surface with $p_g(S) = q(S) = 0$ and of Kodaira dimension one such that there are exactly two multiple fibers of multiplicities $p$ and $q$.
\end{definition} In the sequel, we will be given a degeneration $S \rightsquigarrow X$ from a nonsingular projective surface $S$ to a projective normal surface $X$, and compare information between them. We use the superscript ``$\gen$'' to emphasize this correlation. For example, we use $X^\gen$ instead of $S$. \subsection*{Synopsis of the paper} In Section~\ref{sec:Construction}, we construct a Dolgachev surface $X^\gen$ of type $(2,n)$ following the technique of Lee and Park\,\cite{LeePark:SimplyConnected}. We begin with a pencil of plane cubics generated by two general nodal cubics, which meet at nine different points. The pencil defines a rational map $\P^2 \dashrightarrow \P^1$, undefined at the nine points of intersection. Blowing up the nine intersection points resolves the indeterminacy of $\P^2 \dashrightarrow \P^1$, hence yields a rational elliptic surface. After additional blow ups, we get two special fibers \[ F_1 := C_1 \cup E_1,\quad\text{and}\quad F_2:= C_2\cup E_2\cup \ldots \cup E_{r+1}. \] Let $Y$ denote the resulting rational elliptic surface with the general fiber $C_0$, and let $p \colon Y \to \P^2$ denote the blow down morphism. Contracting the curves in the $F_1$ fiber\,(resp. $F_2$ fiber) except $E_1$\,(resp. $E_{r+1}$), we get the morphism $\pi \colon Y \to X$ to a projective normal surface $X$ with two $T_1$-singularities of types \[ (P_1 \in X) \simeq \Bigl( 0 \in \A^2 \Big/ \frac{1}{4}(1,1) \Bigr) \quad \text{and}\quad (P_2 \in X) \simeq \Bigl( 0 \in \A^2 \Big/ \frac{1}{n^2}(1,na-1) \Bigr) \] for coprime integers $n > a > 0$. Note that the numbers $n,a$ are determined by the formula \[ \frac{n^2}{na-1} = k_1 - \frac{1}{ k_2 - \frac{1}{\ldots -\frac{1}{k_r}} }, \] where $-k_1,\ldots,-k_r$ are the self-intersection numbers of the curves in the chain $\{C_2,\ldots,E_r\}$\,(with the suitable order). We prove the formula\,(Proposition~\ref{prop:SingularSurfaceX}) \begin{equation} \pi^* K_X \equiv - C_0 + \frac{1}{2}C_0 + \frac{n-1}{n}C_0, \label{eq:Synop_QuasiCanoncialBdlFormula} \end{equation} which resembles the canonical bundle formula for minimal elliptic surfaces\,\cite[p.~213]{BHPVdV:Surfaces}. We then obtain $X^\gen$ by taking a general fiber of a $\Q$-Gorenstein smoothing of $X$. Then, since the divisor $\pi_* C_0$ lies away from the singularities of $X$, it moves to a nonsingular elliptic curve $C_0^\gen$ along the deformation $X \rightsquigarrow X^\gen$. We prove that the linear system $\lvert C_0^\gen \rvert$ defines an elliptic fibration $f^\gen \colon X^\gen \to \P^1$. Comparing (\ref{eq:Synop_QuasiCanoncialBdlFormula}) with the canonical bundle formula on $X^\gen$, we arrive at the following theorem. \begin{theorem}[see Theorem~\ref{thm:SmoothingX} for details]\label{thm:Synop_NSLattice} Let $\varphi \colon \mathcal X \to (0 \in T)$ be a one parameter $\Q$-Gorenstein smoothing of $X$ over a smooth curve germ. Then for a general point $0 \neq t_0 \in T$, the fiber $X^\gen := \mathcal X_{t_0}$ is a Dolgachev surface of type $(2,n)$. \end{theorem}
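For instance\,(a worked instance for orientation; the general statements are proved below): for $(n,a)=(3,1)$ the continued fraction reads \[\frac{n^2}{na-1}=\frac{9}{2}=5-\frac{1}{2},\] so the chain $\{C_2,E_2\}$ consists of a $(-5)$-curve and a $(-2)$-curve, and (\ref{eq:Synop_QuasiCanoncialBdlFormula}) becomes $\pi^*K_X\equiv\frac{1}{6}C_0$. Accordingly, on the general fiber one has $C_0^\gen\equiv 6K_{X^\gen}$, the relation used repeatedly in the type $(2,3)$ computations later in the paper.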
We turn to the case $a=1$ in Section~\ref{sec:ExcepBundleOnX^g}, and explain the construction of exceptional vector bundles\,(mostly line bundles) on $X^\gen$ associated with the degeneration $X^\gen \rightsquigarrow X$ using the method developed in \cite{Hacking:ExceptionalVectorBundle}. Let $\iota \colon Y \to \tilde X_0$ be the contraction of $E_2,\ldots,E_r$. Then, $Z_1 := \iota(C_1)$ and $Z_2 := \iota(C_2)$ are smooth rational curves. There exists a proper birational morphism $\Phi \colon \tilde{\mathcal X} \to \mathcal X$\,(a weighted blow up at the singularities of $X = \mathcal X_0$) such that the central fiber $\tilde{\mathcal X}_0 := \Phi^{-1}(\varphi^{-1}(0))$ is described as follows: it is the union of $\tilde X_0$, the projective plane $W_1 = \P^2_{x_1,y_1,z_1}$, and the weighted projective plane $W_2 = \P_{x_2,y_2,z_2}(1, n-1, 1)$ attached along \[ Z_1 \simeq (x_1y_1=z_1^2) \subset W_1,\quad\text{and}\quad Z_2 \simeq (x_2y_2=z_2^n) \subset W_2. \] Intersection theory on $W_1$ and $W_2$ shows that $\mathcal O_{W_1}(1)\big\vert_{Z_1} = \mathcal O_{Z_1}(2)$ and $\mathcal O_{W_2}(n-1)\big\vert_{Z_2} = \mathcal O_{Z_2}(n)$. The central fiber $\tilde{\mathcal X}_0$ has three irreducible components\,(disadvantage), but each component is more manageable than $X$\,(advantage). We work with the smoothing $\tilde{\mathcal X}/(0 \in T)$ instead of $\mathcal X / (0\in T)$. The general fiber of $\tilde{\mathcal X}/(0\in T)$ does not differ from $\mathcal X/(0\in T)$, hence it is the Dolgachev surface $X^\gen$. If $D$ is a divisor on $Y$ satisfying \begin{equation} (D.C_1)=2d_1 \in 2\Z,\ (D.C_2)=nd_2 \in n\Z, \text{ and } (D.E_2) = \ldots = (D.E_r) = 0, \label{eq:Synop_GoodDivisorOnY} \end{equation} then there exists a line bundle $\tilde{\mathcal D}$ on $\tilde{\mathcal X}_0$ such that \[ \tilde{\mathcal D}\big\vert_{\tilde X_0} \simeq \mathcal O_{\tilde X_0}(\iota_*D),\quad \tilde{\mathcal D}\big\vert_{W_1} \simeq \mathcal O_{W_1}(d_1),\quad \text{and}\quad \tilde{\mathcal D}\big\vert_{W_2} \simeq \mathcal O_{W_2}((n-1)d_2). \] It can be shown that the line bundle $\tilde{\mathcal D}$ is exceptional, hence it deforms uniquely to give a bundle $\mathscr D$ on the family $\tilde{\mathcal X}$. In this method, we construct $D^\gen \in\Pic X^\gen$ as the divisor associated with the line bundle $\mathscr D\big\vert_{X^\gen}$. There is a natural topological description of $D^\gen$. Let $B_i \subset X$ be a contractible ball around the singularity $P_i$ and let $M_i$ be the Milnor fiber associated to the smoothing $(P_i \in \mathcal X) / (0 \in T)$. Then $X^\gen$ is diffeomorphic to $(X \setminus (B_1 \cup B_2) ) \cup (M_1 \cup M_2)$, where the union is made by pasting along the natural diffeomorphism $\partial B_i \simeq \partial M_i$\,(see \cite[p.~39]{Manetti:ModuliOfDiffeo}). By Proposition~\ref{prop:Hacking_Specialization}, the relative homology sequence for the pair $(X,\, M_1 \cup M_2)$ reads \[ 0 \to H_2(X^\gen,\Z) \to H_2(X,\Z) \to H_1(M_1,\Z) \oplus H_1(M_2,\Z). \] Since $H_1(M_1,\Z) \simeq \Z/2\Z$ and $H_1(M_2,\Z) \simeq \Z/n\Z$, if $D \in \Pic Y$ is a divisor which fits into the condition (\ref{eq:Synop_GoodDivisorOnY}), then $[\pi_*D] \in H_2(X,\Z)$ maps to the zero element in $H_1(M_1, \Z) \oplus H_1(M_2,\Z)$. Thus, there exists a preimage of $[\pi_*D] \in H_2(X,\Z)$ along $H_2(X^\gen, \Z) \to H_2(X,\Z)$, which is nothing but the Poincar\'e dual of the first Chern class of $\mathcal O_{X^\gen}(D^\gen)$. Section~\ref{sec:NeronSeveri} concerns the case $n=3$ and $a=1$.
Let $D$, $\tilde{\mathcal D}$ and $D^\gen$ be chosen as above. There exists a short exact sequence \begin{equation} 0 \to \tilde{\mathcal D} \to \mathcal O_{\tilde X_0}(\iota_* D) \oplus \mathcal O_{W_1}(d_1) \oplus \mathcal O_{W_2}(2d_2) \to \mathcal O_{Z_1}(2d_1) \oplus \mathcal O_{Z_2}(3d_2) \to 0. \label{eq:Synop_CohomologySequence} \end{equation} This expresses $\chi(\tilde{\mathcal D})$ in terms of $\chi(\iota_*D)$, $d_1$, and $d_2$. Since the Euler characteristic is deformation invariant, we get $\chi(D^\gen) = \chi(\tilde{\mathcal D})$. Furthermore, it can be proved that $(C_0. D) = (C_0^\gen . D^\gen)$. This implies that $(C_0 . D) = (6 K_{X^\gen} . D^\gen)$. The Riemann-Roch formula $\chi(D^\gen) = \chi(\mathcal O_{X^\gen}) + \frac{1}{2}\bigl((D^\gen)^2 - (K_{X^\gen}. D^\gen)\bigr)$, together with $\chi(\mathcal O_{X^\gen})=1$ and $(K_{X^\gen}. D^\gen) = \frac{1}{6}(C_0.D)$, then reads \[ (D^\gen)^2 = \frac{1}{6}(C_0. D) + 2 \chi(\tilde{\mathcal D}) - 2, \] which is a clue for discovering the N\'eron-Severi lattice $\op{NS}(X^\gen)$. This leads to the first main theorem of this paper: \begin{theorem}[$={}$Theorem~\ref{thm:Picard_ofGeneralFiber}] Let $H \in \Pic \P^2$ be the hyperplane divisor, and let $L_0 = p^*(2H)$. Consider the following correspondences of divisors\,(see Figure~\ref{fig:Configuration_Basic}). \[ \begin{array}{c|c|c|c} \Pic Y & F_i - F_j & p^*H - 3F_9 & L_0 \\ \hline \Pic X^\gen & F_{ij}^\gen & (p^*H - 3F_9)^\gen & L_0^\gen \\[1pt] \end{array}\raisebox{-0.9\baselineskip}[0pt][0pt]{\,.} \]\vskip+5pt\noindent Define the divisors $\{G_i^\gen\}_{i=1}^{10} \subset \Pic X^\gen$ as follows: \begin{align*} G_i^\gen &= -L_0^\gen + 10K_{X^\gen} + F_{i9}^\gen,\quad i=1,\ldots,8;\\ G_9^\gen &= -L_0^\gen + 11K_{X^\gen};\\ G_{10}^\gen &= -3L_0^\gen + (p^*H - 3F_9)^\gen + 28K_{X^\gen}. \end{align*} Then the intersection matrix $\bigl( ( G_i^\gen . G_j^\gen) \bigr)$ is \[ \left[ \begin{array}{cccc} -1 & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & -1 & 0 \\ 0 & \cdots & 0 & 1 \end{array} \right]\raisebox{-2\baselineskip}[0pt][0pt]{.} \] In particular, $\{G_i^\gen\}_{i=1}^{10}$ is a $\Z$-basis for the N\'eron-Severi lattice $\op{NS}(X^\gen)$. \end{theorem} \noindent We point out that the assumption $n=3$ is crucial for the definition of $G_{10}^\gen$. Indeed, its definition is motivated by the proof of \cite[Theorem~3.1]{Vial:Exceptional_NeronSeveriLattice}. The divisor $G_{10}^\gen$ has been chosen to satisfy \[ K_{X^\gen} = G_1^\gen + \ldots + G_9^\gen - 3G_{10}^\gen, \] which does not make sense for $n>3$ as $K_{X^\gen}$ is not primitive. In Section~\ref{sec:ExcepCollectMaxLength} we continue to assume $n=3$, $a=1$. We give the proof of the second main theorem of the paper: \begin{theorem}[$={}$Theorem~\ref{thm:ExceptCollection_MaxLength} and Corollary~\ref{cor:Phantom}]\label{thm:Synop_ExceptCollection_MaxLength} Assume that $X^\gen$ originates from a cubic pencil $\lvert \lambda p_*C_1 + \mu p_*C_2\rvert$ generated by two general nodal cubics. Then, there exists a semiorthogonal decomposition \[ \bigr\langle \mathcal A,\ \mathcal O_{X^\gen},\ \mathcal O_{X^\gen}(G_1^\gen),\ \ldots,\ \mathcal O_{X^\gen}(G_{10}^\gen),\ \mathcal O_{X^\gen}(2G_{10}^\gen) \bigr\rangle \] of $\D^{\rm b}(X^\gen)$, where $\mathcal A$ is a nontrivial phantom category\,({\it i.e.} $K_0(\mathcal A) = 0$, $\op{HH}_\bullet(\mathcal A) = 0$, but $\mathcal A\not\simeq 0$). \end{theorem} The proof contains numerous cohomology computations. As usual, the main ingredients which relate the cohomologies between $X$ and $X^\gen$ are the upper-semicontinuity and the invariance of Euler characteristics.
The cohomology long exact sequence of (\ref{eq:Synop_CohomologySequence}) begins with \[ 0 \to H^0(\tilde{\mathcal D}) \to H^0(\iota_*D) \oplus H^0(\mathcal O_{W_1}(d_1)) \oplus H^0(\mathcal O_{W_2}(2d_2)) \to H^0(\mathcal O_{Z_1}(2d_1)) \oplus H^0(\mathcal O_{Z_2}(3d_2)). \] We prove that if $(D.C_1) = 2d_1 \leq 2$, $(D.C_2) = 3d_2 \leq 3$, and $(D.E_2)=0$, then $h^0(\tilde{\mathcal D}) \leq h^0(D)$. This gives an upper bound of $h^0(D^\gen)$. By Serre duality, $h^2(D^\gen) = h^0(K_{X^\gen} - D^\gen)$, hence we are able to use the same method to estimate the upper bound. After the computations of upper bounds of $h^0(D^\gen)$ and $h^2(D^\gen)$, the upper bound of $h^1(D^\gen)$ can be examined by looking at $\chi(D^\gen)$. For any divisor $D^\gen$ which appears in the proof of Theorem~\ref{thm:Synop_ExceptCollection_MaxLength}, at least one of $\{h^0(D^\gen), h^2(D^\gen)\}$ is zero, and the other one is bounded by $\chi(D^\gen)$. Then, $h^1(D^\gen)=0$ and all the three numbers $(h^p(D^\gen) : p=0,1,2)$ are exactly evaluated. One obstruction to this argument is the condition $d_1, d_2 \leq 1$, but it can be dealt with by the following observation: \begin{center} if a line bundle on $X^\gen$ is obtained from either $C_1$ or $2C_2+E_2$, then it is trivial. \end{center} Perturbing $D$ by $C_1$ and $2C_2+E_2$, we can adjust the numbers $d_1$, $d_2$. The proof reduces to finding a suitable upper bound of $h^0(D)$. A first attempt is to find a smooth rational curve $C \subset Y$ such that $(D.C)$ is small. Then, by the short exact sequence $0 \to \mathcal O_Y(D-C) \to \mathcal O_Y(D) \to \mathcal O_C(D) \to 0$, we get $h^0(D) \leq h^0(D-C) + \max\{ 0,\,(D.C)+1\}$. Replace $D$ by $D-C$ and repeat this procedure. It eventually stops when the value of $h^0(D-C)$ is understood immediately\,({\it e.g.} when $D-C$ is linearly equivalent to a negative sum of effective curves). This will give an upper bound of $h^0(D)$. This method sometimes gives a sharp bound of $h^0(D)$, but sometimes not. Indeed, some cohomologies depend on the configuration of the generating cubics $p_*C_1$, $p_*C_2$ of the cubic pencil, while the numerical argument above cannot capture the configuration of $p_*C_1$ and $p_*C_2$. For those cases, we find an upper bound of $h^0(D)$ as follows. Assume that $D$ is an effective divisor. Then, $p_*D \subset \P^2$ is a plane curve. The divisor form of $D$ determines the degree of $p_*D$ and some conditions that $p_*D$ must satisfy. For example, consider $D = p^*H - E_1$. The exceptional curve $E_1$ is obtained by blowing up the node of $p_*C_1$. Hence, $p_*D$ must be a line passing through the node of $p_*C_1$. In this way, conditions can be represented by an ideal $\mathcal I \subset \mathcal O_{\P^2}$. Hence, proving $h^0(D) \leq r$ reduces to proving $h^0\bigl(\mathcal O_{\P^2}(\deg p_*D) \otimes \mathcal I\bigr) \leq r$. The latter can be computed via a computer-based approach\,(Macaulay2). Finally, $\mathcal A\not\simeq 0$ is guaranteed by the argument involving the anticanonical pseudoheight due to Kuznetsov\,\cite{Kuznetsov:Height}. We remark that a (simply connected) Dolgachev surface of type $(2,n)$ cannot have an exceptional collection of maximal length for any $n > 3$ as explained in \cite[Theorem~3.10]{Vial:Exceptional_NeronSeveriLattice}. Also, Theorem~\ref{thm:Synop_ExceptCollection_MaxLength} gives an answer to the question posed in \cite[Remark~3.12]{Vial:Exceptional_NeronSeveriLattice}.
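To illustrate the computer-based step, here is a minimal Macaulay2 sketch of ours\,(with a hypothetical normalization placing the node of $p_*C_1$ at $[0:0:1]$; the actual ideals depend on the chosen cubic pencil). For a saturated ideal, the dimension of the degree-$d$ graded piece computes $h^0(\mathcal O_{\P^2}(d)\otimes\mathcal I)$: \begin{verbatim} -- Macaulay2 sketch: bound h^0(O_P2(d) tensor I) for an ideal I of -- imposed conditions. Example: D = p^*H - E_1, so p_*D is a line -- (d = 1) passing through the node of p_*C_1, placed at [0:0:1]. R = QQ[x,y,z]; I = ideal(x, y); -- plane curves passing through [0:0:1] d = 1; numcols basis(d, I) -- = 2: the pencil of lines through the node \end{verbatim} Here the answer $2$ matches the geometric count\,(the pencil of lines through a point), giving $h^0(p^*H-E_1)\leq 2$.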
\section{Construction of Dolgachev Surfaces} \label{sec:Construction} Let $n$ be an odd integer. This section presents the construction of Dolgachev surfaces of type $(2,n)$. The construction follows the technique introduced in \cite{LeePark:SimplyConnected}. Let $C_1,C_2 \subseteq \P^2$ be general nodal cubic curves meeting at $9$ different points, and let $Y' = \op{Bl}_9\P^2 \to \P^2$ be the blow up at the intersection points. Then the cubic pencil $\lvert \lambda C_1 + \mu C_2\rvert$ defines an elliptic fibration $Y' \to \P^1$, with two special fibers $C_1'$ and $C_2'$ (which correspond to the proper transforms of $C_1$ and $C_2$, respectively). Blowing up the nodes of $C_1'$ and $C_2'$, we obtain the $(-1)$-curves, say $E_1$ and $E_2$ respectively. Also, blowing up one of the intersection points of $C_2''$\,(the proper transform of $C_2'$) and $E_2$, we obtain the configuration described in Figure~\ref{fig:Configuration_Basic}. \begin{figure} \caption{Configuration of the divisors in the surface obtained by blowing up two points of $Y'$.} \label{fig:Configuration_Basic} \end{figure} The divisors $F_1,\ldots,F_9$ are the proper transforms of the exceptional fibers of $Y' = \op{Bl}_9\P^2 \to \P^2$. The numbers in the parentheses are self-intersection numbers of the corresponding divisors. On the fiber $C_2'' \cup E_2' \cup E_3$, we can think of two different blow ups as the following dual intersection graphs illustrate. \[ \begin{tikzpicture}[scale=1] \draw(0,0) node[anchor=center] (C2) {}; \draw(40pt,0pt) node[anchor=center] (E2) {}; \draw(20pt,15pt) node[anchor=center] (E3) {}; \node[below,shift=(90:1pt)] at (C2.south) {$\scriptstyle -5$}; \node[below,shift=(90:1pt)] at (E2.south) {$\scriptstyle -2$}; \node[above] at (E3.north) {$\scriptstyle -1$}; \fill[red] (C2) circle (1.5pt); \fill[blue] (E2) circle (1.5pt); \fill[black] (E3) circle (1.5pt); \draw[red,-] (C2.north east) -- (E3.south west) node[above left, align=center, midway]{\tiny L}; \draw[blue,-] (E2.north west) -- (E3.south east) node[above right, align=center, midway]{\tiny R}; \draw[-] (C2.east) -- (E2.west); \begin{scope}[shift={(-170pt,0pt)}] \draw(0,0) node[anchor=center] (L C2) {}; \draw(40pt,0pt) node[anchor=center] (L E2) {}; \draw(40pt,15pt) node[anchor=center] (L E3) {}; \draw(80pt,0pt) node[anchor=center] (L E4) {}; \node[below] at (L C2.south) {$\scriptstyle -6$}; \node[below] at (L E2.south) {$\scriptstyle -2$}; \node[below] at (L E4.south) {$\scriptstyle -2$}; \node[above] at (L E3.north) {$\scriptstyle -1$}; \fill[red] (L C2) circle (1.5pt); \fill[blue] (L E2) circle (1.5pt); \fill[red] (L E3) circle (1.5pt); \fill[black] (L E4) circle (1.5pt); \draw[-] (L C2.north east) -- (L E3.south west) node[above, align=center, midway]{\tiny L'}; \draw[-] (L E2.east) -- (L E4.west); \draw[-] (L C2.east) -- (L E2.west); \draw[-] (L E4.north west) -- (L E3.south east) node[above, align=center, midway]{\tiny R'}; \end{scope} \begin{scope}[shift={(130pt,0pt)}] \draw(0,0) node[anchor=center] (R C2) {}; \draw(40pt,0pt) node[anchor=center] (R E2) {}; \draw(40pt,15pt) node[anchor=center] (R E3) {}; \draw(80pt,0pt) node[anchor=center] (R E4) {}; \node[below] at (R C2.south) {$\scriptstyle -2$}; \node[below] at (R E2.south) {$\scriptstyle -5$}; \node[below] at (R E4.south) {$\scriptstyle -3$}; \node[above] at (R E3.north) {$\scriptstyle -1$}; \fill[black] (R C2) circle (1.5pt); \fill[red] (R E2) circle (1.5pt); \fill[blue] (R E3) circle (1.5pt); \fill[blue] (R E4) circle (1.5pt); \draw[-] (R C2.north east) -- (R E3.south west) 
node[above, align=center, midway]{\tiny L'}; \draw[-] (R E2.east) -- (R E4.west); \draw[-] (R C2.east) -- (R E2.west); \draw[-] (R E4.north west) -- (R E3.south east) node[above, align=center, midway]{\tiny R'}; \end{scope} \draw [->,decorate,decoration={snake,amplitude=1pt,segment length=5pt, post length=2pt}] (-15pt,5pt) -- (-75pt, 5pt) node[below, align=center, midway]{$\scriptstyle \op{Bl}_{\rm L}$}; \draw [->,decorate,decoration={snake,amplitude=1pt,segment length=5pt, post length=2pt}] (55pt,5pt) -- (115pt, 5pt) node[below, align=center, midway]{$\scriptstyle \op{Bl}_{\rm R}$}; \end{tikzpicture} \] In general, if one has a fiber with configuration\!\! \raisebox{-11pt}[0pt][13pt]{ \begin{tikzpicture} \draw(0,0) node[anchor=center] (E1) {}; \draw(30pt,0pt) node[anchor=center] (E2) {}; \draw(60pt,0pt) node[anchor=center, inner sep=10pt] (E3) {}; \draw(90pt,0pt) node[anchor=center] (E4) {}; \fill[black] (E1) circle (1.5pt); \fill[black] (E2) circle (1.5pt); \draw (E3) node[anchor=center]{$\cdots$}; \fill[black] (E4) circle (1.5pt); \node[black,below,shift=(90:2pt)] at (E1.south) {$\scriptscriptstyle -k_1$}; \node[black,below,shift=(90:2pt)] at (E2.south) {$\scriptscriptstyle -k_2$}; \node[black,below,shift=(90:2pt)] at (E4.south) {$\scriptscriptstyle -k_r$}; \draw[-] (E1.east) -- (E2.west); \draw[-] (E2.east) -- (E3.west); \draw[-] (E3.east) -- (E4.west); \end{tikzpicture} }\!\!\!\!, then blowing up at L yields\!\!\!\! \raisebox{-11pt}[0pt][13pt]{ \begin{tikzpicture} \draw(0,0) node[anchor=center] (E1) {}; \draw(25pt,0pt) node[anchor=center] (E2) {}; \draw(50pt,0pt) node[anchor=center, inner sep=10pt] (E3) {}; \draw(75pt,0pt) node[anchor=center] (E4) {}; \draw(100pt,0pt) node[anchor=center] (E5) {}; \fill[black] (E1) circle (1.5pt); \fill[black] (E2) circle (1.5pt); \draw (E3) node[anchor=center]{$\cdots$}; \fill[black] (E4) circle (1.5pt); \fill[black] (E5) circle (1.5pt); \node[black,below,shift=(90:2pt)] at (E1.south) {$\scriptscriptstyle -(k_1+1)$}; \node[black,below,shift=(90:2pt)] at (E2.south) {$\scriptscriptstyle -k_2$}; \node[black,below,shift=(90:2pt)] at (E4.south) {$\scriptscriptstyle -k_r$}; \node[black,below,shift=(90:2pt)] at (E5.south) {$\scriptscriptstyle -2$}; \draw[-] (E1.east) -- (E2.west); \draw[-] (E2.east) -- (E3.west); \draw[-] (E3.east) -- (E4.west); \draw[-] (E4.east) -- (E5.west); \end{tikzpicture} }\!\!. Similarly, the blowing up at R yields\!\! \raisebox{-12pt}[0pt][13pt]{ \begin{tikzpicture} \draw(0,0) node[anchor=center] (E1) {}; \draw(25pt,0pt) node[anchor=center] (E2) {}; \draw(50pt,0pt) node[anchor=center, inner sep=10pt] (E3) {}; \draw(75pt,0pt) node[anchor=center] (E4) {}; \draw(108pt,0pt) node[anchor=center] (E5) {}; \fill[black] (E1) circle (1.5pt); \fill[black] (E2) circle (1.5pt); \draw (E3) node[anchor=center]{$\cdots$}; \fill[black] (E4) circle (1.5pt); \fill[black] (E5) circle (1.5pt); \node[black,below,shift=(90:2pt)] at (E1.south) {$\scriptscriptstyle -2$}; \node[black,below,shift=(90:2pt)] at (E2.south) {$\scriptscriptstyle -k_1$}; \node[black,below,shift=(90:2pt)] at (E4.south) {$\scriptscriptstyle -k_{r-1}$}; \node[black,below,shift=(90:2pt)] at (E5.south) {$\scriptscriptstyle -(k_r+1)$}; \draw[-] (E1.east) -- (E2.west); \draw[-] (E2.east) -- (E3.west); \draw[-] (E3.east) -- (E4.west); \draw[-] (E4.east) -- (E5.west); \end{tikzpicture} }\!\!\!\!\!\!\!.\ \ These present all possible resolution graphs of $T_1$-singularities\,\cite[Theorem~17]{Manetti:NormalDegenerationOfPlane}. 
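As a worked check of these two moves\,(ours, using the continued fraction expansion from the introduction): the chain $[5,2]$ satisfies $5-\frac{1}{2}=\frac{9}{2}=\frac{3^2}{3\cdot 1-1}$, so it contracts to a $T_1$-singularity with $(n,a)=(3,1)$. Blowing up at L yields $[6,2,2]$, and $6-\frac{1}{2-\frac{1}{2}}=\frac{16}{3}=\frac{4^2}{4\cdot 1-1}$ gives $(n,a)=(4,1)$; blowing up at R yields $[2,5,3]$, and $2-\frac{1}{5-\frac{1}{3}}=\frac{25}{14}=\frac{5^2}{5\cdot 3-1}$ gives $(n,a)=(5,3)$.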
Let $Y$ be the surface obtained after successive blow ups on the second special fiber $C_2'' \cup E_2' \cup E_3$, so that the resulting fiber contains the resolution graph of a $T_1$-singularity of type $\bigl(0 \in \A^2 / \frac{1}{n^2}(1, na-1)\bigr)$ for some odd integer $n$ and an integer $a$ with $\op{gcd}(n,a)=1$. To simplify notation, we will not distinguish the divisors and their proper transforms unless there arise ambiguities. For instance, the proper transform of $C_1 \in \Pic \P^2$ in $Y$ will be denoted by $C_1$, and so on. We fix this configuration of $Y$ throughout this paper, so it is appropriate to give a summary here: \begin{enumerate}[label=\normalfont(\arabic{enumi})] \item the $(-1)$-curves $F_1,\ldots,F_9$ that are proper transforms of the exceptional fibers of $\op{Bl}_9 \P^2 \to \P^2$; \item the $(-4)$-curve $C_1$ and the $(-1)$-curve $E_1$ arising from the blowing up of the first nodal curve; \item the negative curves $C_2,\,E_2,\,\ldots,\,E_r,\,E_{r+1}$, where $E_{r+1}^2 = -1$ and $C_2,\,E_2,\,\ldots,\,E_r$ form a resolution graph of a $T_1$-singularity of type $\bigl(0 \in \A^2 \big/\frac{1}{n^2}(1,na-1)\bigr)$. \end{enumerate}\vskip+5pt \begin{figure} \caption{Configuration of the surface $Y$.} \label{fig:Configuration_General} \end{figure} Let $C_0$ be the general fiber of the elliptic fibration $Y \to \P^1$. The fibers are linearly equivalent, thus
\begin{align} C_0 &= C_1 + 2E_1 \nonumber \\ &= C_2 + a_2 E_2 + a_3 E_3 + \ldots + a_{r+1} E_{r+1}, \label{eq:SpecialFiber} \end{align} where $a_2,\ldots,a_{r+1}$ are the integers determined by the system of linear equations \begin{equation}\label{eq:EquationOnFiber} (C_2.E_i) + \sum_{j=2}^{r+1} a_j (E_j.E_i) = 0,\quad i=2,\ldots, r+1. \end{equation} Note that the values $(C_2.E_i)$, $(E_j.E_i)$ are explicitly given in the configuration\,(Figure~\ref{fig:Configuration_General}). The matrix $\bigl( (E_j.E_i) \bigr)_{2\leq i,j \leq r}$ is negative definite\,\cite{Mumford:TopologyOfNormalSurfaceSingularity}, and the number $a_{r+1}$ is determined by Proposition \ref{prop:SingIndexAndFiberCoefficients}, hence the system (\ref{eq:EquationOnFiber}) has a unique solution. \begin{lemma}\label{lem:CanonicalofY} In the above situation, the following formula holds: \[ K_Y = E_1 - C_2 - E_2 - \ldots - E_{r+1}. \] \end{lemma} \begin{proof} The proof proceeds by an induction on $r$.
The minimum value of $r$ is two, the case in which $C_2\cup E_2$ form the chain \raisebox{0pt}[15pt][0pt]{ \begin{tikzpicture} \draw(0,0) node[anchor=center] (E1) {}; \draw(20pt,0pt) node[anchor=center] (E2) {}; \fill[black] (E1) circle (1.5pt); \fill[black] (E2) circle (1.5pt); \node[black,shift=(90:2pt)] at (E1.north) {$\scriptscriptstyle -5$}; \node[black,shift=(90:2pt)] at (E2.north) {$\scriptscriptstyle -2$}; \draw[-] (E1.east) -- (E2.west); \end{tikzpicture} }\!\!. Let $H \in \Pic \P^2$ be a hyperplane divisor, and let $p \colon Y \to \P^2$ be the blowing down morphism. Then \[ K_Y = p^* K_{\P^2} + F_1 + \ldots + F_9 + E_1 + d_2 E_2 + d_3E_3 \] for some $d_2,d_3 \in \Z$. Since any cubic curve in $\P^2$ is linearly equivalent to $3 H$, \begin{align*} p^* ( 3H ) &= C_0 + F_1 + \ldots + F_9 \\ &= (C_2 + a_2 E_2 +a_3 E_3) + F_1 + \ldots + F_9 \end{align*} where $a_2,a_3$ are the integers introduced in (\ref{eq:SpecialFiber}). Hence, \begin{align*} K_Y &= p^* (-3H) + F_1 + \ldots + F_9 + E_1 + d_2 E_2 + d_3 E_3 \\ &= E_1 - C_2 + (d_2-a_2)E_2 + (d_3-a_3)E_3. \end{align*} Here, the genus formula applied to $E_2$ and $E_3$ forces $d_2-a_2 = d_3-a_3 = -1$, so that $K_Y = E_1 - C_2 - E_2 - E_3$. Assume the induction hypothesis that $K_Y = E_1 - C_2 - E_2 - \ldots - E_{r+1}$. Let $D \in \{C_2,E_2,\ldots,E_r\}$ be a divisor that intersects $E_{r+1}$, and let $\varphi \colon \widetilde Y \to Y$ be the blowing up at the point $D \cap E_{r+1}$. Then, \[ K_{\widetilde Y} = \varphi^* K_Y + \widetilde E_{r+2}, \] where $\widetilde E_{r+2}$ is the exceptional divisor of $\varphi$. Let $\widetilde C_2, \widetilde E_1, \ldots, \widetilde E_{r+1}$ denote the proper transforms of the corresponding divisors. Then, $\varphi^*$ maps $D$ to $(\widetilde D + \widetilde E_{r+2})$, maps $E_{r+1}$ to $(\widetilde E_{r+1} + \widetilde E_{r+2})$, and maps the other divisors to their proper transforms. It follows that \begin{align*} \varphi^*K_Y &= \varphi^*(E_1 - C_2 - \ldots - E_{r+1}) \\ &= \widetilde E_1 - \widetilde C_2 - \ldots - \widetilde E_{r+1} - 2 \widetilde E_{r+2}. \end{align*} Hence, $K_{\widetilde Y} = \varphi^* K_Y + \widetilde E_{r+2} = \widetilde E_1 - \widetilde C_2 - \widetilde E_2 - \ldots - \widetilde E_{r+2}$. \end{proof}
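\begin{remark} As a sanity check (our computation) in the case $r=2$: the chain consists of $C_2$\,($C_2^2=-5$), $E_2$\,($E_2^2=-2$), and the $(-1)$-curve $E_3=E_{r+1}$ meeting both, so the system (\ref{eq:EquationOnFiber}) reads \[1-2a_2+a_3=0\quad (i=2),\qquad 1+a_2-a_3=0\quad (i=3),\] whence $a_2=2$, $a_3=3$, and $C_0=C_2+2E_2+3E_3$. In particular $a_{r+1}=3=n$, and the coefficients at the two ends of the chain are $a=1$ and $n-a=2$, as predicted by Proposition~\ref{prop:SingIndexAndFiberCoefficients} below. \end{remark}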
\begin{proposition}\label{prop:SingularSurfaceX} Let $\pi \colon Y \to X$ be the contraction of the curves $C_1,\,C_2,\, E_2,\,\ldots,\, E_r$. Let $P_1 = \pi(C_1)$ and $P_2 = \pi(C_2 \cup E_2 \cup \ldots \cup E_r)$ be the singularities of types $\bigl( 0 \in \A^2 \big/ \frac{1}{4}(1,1)\bigr)$ and $\bigl( 0 \in \A^2 \big/ \frac{1}{n^2}(1,na-1)\bigr)$, respectively. Then the following properties of $X$ hold: \begin{enumerate}[ref=(\alph{enumi})] \item \label{item:SingularSurfaceX_Cohomologies}$X$ is a projective normal surface with $H^1(\mathcal O_X) = H^2(\mathcal O_X)=0$; \item $\pi^*K_X \equiv (\frac 12 - \frac 1n)C_0 \equiv C_0 - \frac{1}{2} C_0 - \frac{1}{n} C_0$ as $\Q$-divisors. \end{enumerate} In particular, $K_X^2 = 0$, $K_X$ is nef, but $K_X$ is not numerically trivial. \end{proposition} \begin{proof}\ \begin{enumerate} \item Since the singularities of $X$ are rational, $R^q \pi_* \mathcal O_Y = 0$ for $q > 0$. The Leray spectral sequence \[ E_2^{p,q} = H^p( X, R^q\pi_* \mathcal O_Y ) \Rightarrow H^{p+q}(Y,\mathcal O_Y) \] says that $H^p(Y,\mathcal O_Y) \simeq H^p (X, \pi_* \mathcal O_Y) = H^p(X,\mathcal O_X)$ for $p > 0$. The surface $Y$ is obtained from $\P^2$ by a finite sequence of blow ups, hence $H^1(Y,\mathcal O_Y) = H^2(Y,\mathcal O_Y) =0$. One can immediately verify that the hypotheses of Artin's criterion for contractibility\,\cite[Theorem~2.3]{Artin:Contractibility} hold; thus $X$ is projective. \item Since the morphism $\pi$ contracts $C_1,\,C_2,\,E_2,\,\ldots,\,E_r$, we may write \[ \pi^* K_X \equiv K_Y + c_1 C_1 + c_2 C_2 + b_2 E_2 + \ldots + b_r E_r, \] for $c_1,c_2,b_2,\ldots,b_r \in \Q$\,(the coefficients may not be integral since $X$ is singular). It is easy to see that $c_1 = \frac 12$: since $(\pi^*K_X.C_1)=0$ and $(K_Y.C_1)=2$ by adjunction\,($C_1$ is a smooth rational curve with $C_1^2=-4$), we get $2 + c_1C_1^2 = 0$. By Lemma~\ref{lem:CanonicalofY}, \[ \pi^* K_X \equiv \frac{1}{2}C_0 + (c_2- 1)C_2 + (b_2 -1)E_2+ \ldots + (b_r-1) E_r - E_{r+1}. \] Neither $\pi^*K_X$ nor $C_0$ intersects $C_2,E_2,\ldots,E_r$. Thus, we get \begin{equation}\label{eq:Aux1} \left\{ \begin{array}{l@{}l} 0 &{}= (1-c_2)(C_2^2) + \sum_{j =2}^r (1-b_j)(E_j.C_2) + (E_{r+1}.C_2) \\ 0 &{}= (1-c_2)(C_2.E_i) + \sum_{j=2}^r (1-b_j)(E_j.E_i) + (E_{r+1}.E_i),\quad \text{for\ }i=2,\ldots,r. \end{array} \right. \end{equation} After dividing by $a_{r+1}$, (\ref{eq:EquationOnFiber}) becomes \[ 0=\frac{1}{a_{r+1}} (C_2. E_i) + \sum_{j=2}^r \frac{a_j}{a_{r+1}} (E_j.E_i) + (E_{r+1}.E_i),\quad \text{for\ }i=2,\ldots,r. \] In addition, the equation $( C_2 + a_2 E_2 + \ldots + a_{r+1} E_{r+1} \mathbin. C_2 ) = (C_0 . C_2) = 0 $ gives rise to \[ 0=\frac{1}{a_{r+1}} (C_2^2) + \sum_{j=2}^r \frac{a_j}{a_{r+1}} (E_j.C_2) + (E_{r+1}.C_2). \] Comparing these equations with (\ref{eq:Aux1}), it is easy to see that the ordered tuples \[ (1-c_2,\ 1-b_2,\ \ldots,\ 1-b_r)\quad\text{and}\quad (1/a_{r+1},\ a_2/a_{r+1},\ \ldots,\ a_r / a_{r+1}) \] fit into the same system of linear equations. Since the intersection matrix of the divisors $(C_2,E_2,\ldots,E_r)$ is negative definite, \[ (1-c_2,\, 1-b_2,\, \ldots,\, 1-b_r) = (1/a_{r+1},\ a_2/a_{r+1},\ \ldots,\ a_r / a_{r+1}). \] It follows that \begin{align*} \pi^* K_X &\equiv \frac{1}{2}C_0 + (c_2 -1 )C_2 + (b_2 - 1)E_2 + \ldots + (b_r -1) E_r - E_{r+1} \\ &\equiv \frac{1}{2}C_0 - \frac{1}{a_{r+1}} \bigl( C_2 + a_2 E_2 + \ldots + a_{r+1} E_{r+1} \bigr) \\ &\equiv \Bigl( \frac{1}{2} - \frac{1}{a_{r+1}} \Bigr) C_0. \end{align*} It remains to prove $a_{r+1} = n$. This directly follows from Proposition~\ref{prop:SingIndexAndFiberCoefficients}.
It is immediate to see that $C_0^2 = 0$, $C_0$ is nef, and $C_0$ is not numerically trivial. The same properties are true for $\pi^*K_X$. \qedhere \end{enumerate} \end{proof} \begin{proposition}\label{prop:SingIndexAndFiberCoefficients} Suppose that $C_2 \cup E_2 \cup \ldots \cup E_r$ has the configuration\!\! \raisebox{-11pt}[0pt][13pt]{ \begin{tikzpicture} \draw(0,0) node[anchor=center] (E1) {}; \draw(30pt,0pt) node[anchor=center] (E2) {}; \draw(60pt,0pt) node[anchor=center, inner sep=10pt] (E3) {}; \draw(90pt,0pt) node[anchor=center] (E4) {}; \fill[black] (E1) circle (1.5pt); \fill[black] (E2) circle (1.5pt); \draw (E3) node[anchor=center]{$\cdots$}; \fill[black] (E4) circle (1.5pt); \node[black,below,shift=(90:2pt)] at (E1.south) {$\scriptscriptstyle -k_1$}; \node[black,below,shift=(90:2pt)] at (E2.south) {$\scriptscriptstyle -k_2$}; \node[black,below,shift=(90:2pt)] at (E4.south) {$\scriptscriptstyle -k_r$}; \draw[-] (E1.east) -- (E2.west); \draw[-] (E2.east) -- (E3.west); \draw[-] (E3.east) -- (E4.west); \end{tikzpicture} }\!\!, so that it contracts to give a $T_1$-singularity of type $\bigl( 0 \in \A^2 \big/ \frac{1}{n^2}(1,na-1)\bigr)$. Then, in the expression \[ C_2 + a_2 E_2 + \ldots + a_{r+1} E_{r+1} \] of the fiber (\ref{eq:SpecialFiber}), the coefficient of the $(-k_1)$-curve is $a$, and the coefficient of the $(-k_r)$-curve is $(n-a)$. Furthermore, $a_{r+1}$ equals the sum of these two coefficients, hence $a_{r+1} = n$. \end{proposition} \begin{proof} The proof proceeds by induction on $r$. The case $r = 2$ is trivial. Indeed, a simple computation shows that $n = 3$, $a = 1$, and $a_2 = 2$, $a_3= 3$. To simplify notation, we reindex $\{C_2,\, E_2,\,\ldots,\, E_{r+1}\}$ as follows: \[ (G_1,\,G_2,\,\ldots,\,G_{r+1}) = (E_{i_k},\,E_{i_{k-1}},\,\ldots,\,E_{i_1},\,C_2,\,E_{j_1},\,\ldots,\,E_{j_\ell},\,E_{r+1}).\hskip-35pt\tag{Figure~\ref{fig:Configuration_General}} \] By the induction hypothesis, we may assume \[ C_2 + a_2E_2 + \ldots + a_{r+1} E_{r+1} = a G_1 + g_2 G_2 + \ldots + (n-a) G_r + n G_{r+1}, \] where $g_2,\ldots,g_{r-1} \in \Z$ denote the intermediate coefficients. Let $\varphi_1 \colon \widetilde Y \to Y$ be the blow up at the point $G_{r+1} \cap G_1$, let $\widetilde G_i$\,($i=1,\ldots,r+1$) be the proper transform of $G_i$, and let $\widetilde G_{r+2}$ be the exceptional divisor. The $(-1)$-curve $\widetilde G_{r+2}$ meets $\widetilde G_1$ and $\widetilde G_{r+1}$ transversally, so \begin{align*} \varphi_1^*( aG_1 + \ldots + nG_{r+1}) &= a ( \widetilde G_1 + \widetilde G_{r+2}) + g_2 \widetilde G_2 + \ldots + (n-a) \widetilde G_r + n( \widetilde G_{r+1} + \widetilde G_{r+2}) \\ &= a \widetilde G_1 + g_2\widetilde G_2 + \ldots + (n-a) \widetilde G_r + n\widetilde G_{r+1} + (n+a) \widetilde G_{r+2}. \end{align*} It is well-known that the contraction of $\widetilde G_1, \ldots, \widetilde G_{r+1} \subset \widetilde Y$ produces a cyclic quotient singularity of type \[ \Bigl( 0 \in \A^2 \Big/ \frac{1}{(n+a)^2}(1,(n+a)n-1) \Bigr). \] This proves the statement for the chain $\widetilde G_1 \cup \ldots \cup \widetilde G_{r+2}$, so we are done by induction. The same argument also works if one performs the blow up $\varphi_2 \colon \widetilde Y' \to Y$ at the point $G_{r+1} \cap G_r$. \end{proof} Now we want to obtain a smooth surface via a $\Q$-Gorenstein smoothing of $X$. It is well-known that $T_1$-singularities admit local $\Q$-Gorenstein smoothings, thus we have to verify that: \begin{enumerate}[label=(\alph{enumi})] \item every formal deformation of $X$ is algebraizable; \item every local deformation of $X$ can be globalized.
\end{enumerate} The answer to (a) is an immediate consequence of Grothendieck's existence theorem\,\cite[Example~21.2.5]{Hartshorne:DeformationTheory} since $H^2(\mathcal O_X)=0$. The next lemma verifies (b). \begin{lemma}\label{lem:NoObstruction} Let $Y$ be the nonsingular rational elliptic surface introduced above, and let $\mathcal T_Y$ be the tangent sheaf of $Y$. Then, \[ H^2(Y, \mathcal T_Y( - C_1 - C_2 - E_2 - \ldots - E_r ) ) = 0. \] In particular, $H^2(X,\mathcal T_X) = 0$\,(see \cite[Theorem~2]{LeePark:SimplyConnected}). \end{lemma} \begin{proof} The proof is not very different from \cite[\textsection4, Example~2]{LeePark:SimplyConnected}. The main claim is \[ H^0(Y, \Omega_Y^1(K_Y + C_1 + C_2 + E_2 + \ldots + E_r)) =0. \] By Lemma~\ref{lem:CanonicalofY} and equation~(\ref{eq:SpecialFiber}), \[ K_Y + C_1 + C_2 + E_2 + \ldots + E_r = C_0 - E_1 - E_{r+1}. \] Then, $h^0(Y,\Omega_Y^1(C_0 - E_1 - E_{r+1})) \leq h^0(Y,\Omega_Y^1(C_0)) = h^0(Y',\Omega_{Y'}^1(C_0'))$ where $Y'= \op{Bl}_9\P^2$, and $h^0(Y',\Omega_{Y'}^1(C_0')) =0$ by \cite[\textsection4, Lemma~2]{LeePark:SimplyConnected}. The result directly follows from Serre duality. \end{proof}
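To illustrate Proposition~\ref{prop:SingularSurfaceX} in the simplest case (the one studied from Section~\ref{sec:NeronSeveri} on), take $n = 3$ and $a = 1$, so that the chain is $C_2 \cup E_2$ with $C_2^2 = -5$, $E_2^2 = -2$, and the special fiber is $C_0 = C_2 + 2E_2 + 3E_3$. Then $a_{r+1} = a_3 = 3$, and the proof of Proposition~\ref{prop:SingularSurfaceX} gives
\[
(1 - c_2,\ 1 - b_2) = \Bigl( \frac{1}{3},\ \frac{2}{3} \Bigr),
\qquad
\pi^*K_X \equiv \Bigl( \frac{1}{2} - \frac{1}{3} \Bigr) C_0 = \frac{1}{6}\,C_0,
\]
which is consistent with the relation $C_0^\gen = 6K_{X^\gen}$ used in Section~\ref{sec:NeronSeveri}.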
We have shown that the surface $X$ admits a $\Q$-Gorenstein smoothing $\mathcal X \to T$. The next aim is to show that the general fiber $X^\gen := \mathcal X_t$ is a Dolgachev surface of type $(2,n)$. \begin{proposition}\label{prop:CohomologyComparison_YtoX} Let $X$ be a projective normal surface with only rational singularities, let $\pi \colon Y \to X$ be a resolution of singularities, and let $E_1,\ldots,E_r$ be the exceptional divisors. If $D$ is a divisor on $Y$ such that $(D.E_i)=0$ for all $i=1,\ldots,r$, then \[ H^p(Y,D) \simeq H^p(X,\pi_*D) \] for all $p \geq 0$. \end{proposition} \begin{proof} Since the singularities of $X$ are rational, each $E_i$ is a smooth rational curve. The assumption on $D$ in the statement implies that $\pi_*D$ is Cartier\,\cite[Theorem~12.1]{Lipman:RationalSings}, and $\pi^*\mathcal O_X(\pi_*D) = \mathcal O_Y(D)$. By the projection formula, $R^p\pi_*\mathcal O_Y(D) \simeq R^p \pi_*( \mathcal O_Y \otimes \pi^* \mathcal O_X(\pi_*D) ) \simeq (R^p \pi_* \mathcal O_Y) \otimes \mathcal O_X(\pi_*D)$. Since $X$ is normal and has only rational singularities, \[ R^p\pi_* \mathcal O_Y = \left\{ \begin{array}{ll} \mathcal O_X & \text{if } p=0\\ 0 & \text{if } p > 0. \end{array} \right. \] Now, the claim is an immediate consequence of the Leray spectral sequence \[ E_2^{p,q} = H^p(X,\, R^q\pi_*\mathcal O_Y \otimes \mathcal O_X(\pi_*D) ) \Rightarrow H^{p+q}(Y,\, \mathcal O_Y(D)). \qedhere \] \end{proof} \begin{lemma}\label{lem:Cohomologies_ofGeneralFiber_inY} Let $\pi \colon Y \to X$ be the contraction defined in Proposition~\ref{prop:SingularSurfaceX}. Then, \[ h^0(X,\pi_*C_0) = 2,\quad h^1(X,\pi_*C_0)=1,\ \text{and}\quad h^2(X,\pi_*C_0)=0. \] \end{lemma} \begin{proof} It is easy to see that $(C_0.C_1) = (C_0.C_2) = (C_0.E_2) =\ldots = (C_0.E_r) = 0$. Hence by Proposition~\ref{prop:CohomologyComparison_YtoX}, it suffices to compute $h^p(Y,C_0)$. Since $C_0^2 = (K_Y . C_0)=0$, the Riemann-Roch formula shows $\chi(C_0)=1$. By Serre duality, $h^2(C_0) = h^0(K_Y - C_0)$.
In the short exact sequence \[ 0 \to \mathcal O_Y(K_Y - C_0 - E_1) \to \mathcal O_Y(K_Y -C_0) \to \mathcal O_{E_1} \otimes \mathcal O_Y(K_Y - C_0)\to 0, \] we find that $H^0(\mathcal O_{E_1} \otimes \mathcal O_Y(K_Y - C_0)) = 0$ since $(K_Y - C_0 \mathbin . E_1) = -1$. It follows that \[ h^0(K_Y-C_0) = h^0(K_Y-C_0-E_1), \] but $K_Y - C_0 - E_1 = - C_0 - C_2 - E_2 - \ldots - E_{r+1}$ by Lemma~\ref{lem:CanonicalofY}. Hence $h^2(C_0)=0$. Since the complete linear system $\lvert C_0 \rvert$ defines the elliptic fibration $Y \to \P^1$, $h^0(C_0) = 2$. Furthermore, $h^1(C_0)=1$ follows from $h^0(C_0)=2$, $h^2(C_0)=0$, and $\chi(C_0)=1$. \end{proof} The following proposition, due to Manetti\,\cite{Manetti:NormalDegenerationOfPlane}, is a key ingredient of the proof of Theorem~\ref{thm:SmoothingX}. \begin{proposition}[{\cite[Lemma~2]{Manetti:NormalDegenerationOfPlane}}]\label{prop:Manetti_PicLemma} Let $\mathcal X \to ( 0 \in T)$ be a smoothing of a normal surface $X$ with $H^1(\mathcal O_X) = H^2(\mathcal O_X)=0$. Then for every $t \in T$, the natural restriction map of the second cohomology groups $H^2(\mathcal X,\Z) \to H^2(\mathcal X_t,\Z)$ induces an injection $\Pic \mathcal X \to \Pic \mathcal X_t$. Furthermore, the restriction to the central fiber $\Pic \mathcal X \to \Pic X$ is an isomorphism. \end{proposition} \begin{theorem}\label{thm:SmoothingX} Let $X$ be the projective normal surface defined in Proposition~\ref{prop:SingularSurfaceX}, and let $\varphi \colon \mathcal X \to (0 \in T)$ be a one parameter $\Q$-Gorenstein smoothing of $X$ over a smooth curve germ $(0 \in T)$. For a general point $0 \neq t_0 \in T$, the fiber $X^\gen := \mathcal X_{t_0}$ satisfies the following: \begin{enumerate} \item $p_g(X^\gen) = q(X^\gen) = 0$; \item $X^\gen$ is a simply connected, minimal, nonsingular surface with Kodaira dimension $1$; \item there exists an elliptic fibration $f^\gen \colon X^\gen \to \P^1$ such that $K_{X^\gen} \equiv C_0^\gen - \frac{1}{2} C_0^\gen - \frac{1}{n} C_0^\gen$, where $C_0^\gen$ is the general fiber of $f^\gen$. In particular, $X^\gen$ is isomorphic to the Dolgachev surface of type $(2,n)$. \end{enumerate} \end{theorem} \begin{proof}\ \begin{enumerate}[ref=(\normalfont\alph{enumi})] \item This follows from Proposition~\ref{prop:SingularSurfaceX}\ref{item:SingularSurfaceX_Cohomologies} and the upper-semicontinuity of $h^p$. \item Shrinking $(0 \in T)$ if necessary, we may assume that $X^\gen$ is simply connected\,\cite[p.~499]{LeePark:SimplyConnected}, and that $K_{X^\gen}$ is nef\,\cite[\textsection5.d]{Nakayama:ZariskiDecomposition}. If $K_{X^\gen}$ is numerically trivial, then $X^\gen$ must be an Enriques surface by the classification theory of surfaces. This violates the simple connectivity of $X^\gen$. It follows that $K_{X^\gen}$ is not numerically trivial, and the Kodaira dimension of $X^\gen$ is $1$. \item Since the divisor $\pi_*C_0$ is not supported on the singular points of $X$, $\pi_* C_0 \in \Pic X$. By Proposition~\ref{prop:Manetti_PicLemma}, $\Pic X \simeq \Pic \mathcal X \hookrightarrow \Pic X^\gen$. Let $C_0^\gen\in \Pic X^\gen$ be the image of $\pi_*C_0$ under this correspondence. By \cite[Theorem~4.2]{Kawamata:Moderate}, there exists a smooth complex surface $B$ such that the morphism $\varphi$ factors through $g \colon \mathcal X \to B$ and the general fiber of $g$ is an elliptic curve. In particular, the complete linear system $\lvert C_0^\gen\rvert$ defines the elliptic fibration $f^\gen \colon X^\gen \to \P^1$.
Since $\mathcal X / (0 \in T)$ is a $\Q$-Gorenstein deformation, the map $\Pic X \hookrightarrow \Pic X^\gen$ in Proposition~\ref{prop:Manetti_PicLemma} maps $2nK_X - (n-2) \pi_* C_0$ to $2nK_{X^\gen} - (n-2) C_0^\gen$. Furthermore, $2nK_X - (n-2)\pi_*C_0 \in \Pic X$ is zero, so \[ K_{X^\gen} \equiv C_0^\gen - \frac{1}{2}C_0^\gen - \frac{1}{n}C_0^\gen. \] By \cite[Chapter~2]{Dolgachev:AlgebraicSurfaces}, every minimal simply connected nonsingular surface with $p_g=q=0$ and of Kodaira dimension $1$ has exactly two multiple fibers with coprime multiplicities. Thus, there exist coprime integers $q > p > 0$ such that $X^\gen \simeq X_{p,q}$ where $X_{p,q}$ is a Dolgachev surface of type $(p,q)$. The canonical bundle formula says that $K_{X_{p,q}} \equiv C_0^\gen - \frac 1p C_0^\gen - \frac 1q C_0^\gen$. Since $X^\gen \simeq X_{p,q}$, this leads to the equality \[ \frac 12 + \frac 1n = \frac 1p + \frac 1q. \] Assume $2 < p < q$. Then, $\frac 12 < \frac 12 + \frac 1n = \frac 1p + \frac 1q \leq \frac 13 + \frac 1q$. Hence, $q < 6$. The only possible candidates are $(p,q,n) = (3,4,12)$, $(3,5,30)$, but all of these cases violate $\op{gcd}(2,n) = 1$. It follows that $p=2$ and $q = n$.\qedhere \end{enumerate} \end{proof}
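For completeness, here is the elementary arithmetic behind the exclusion of the case $2 < p < q$ in the proof above. If $p \geq 4$, then $\frac 1p + \frac 1q \leq \frac 14 + \frac 15 < \frac 12$, a contradiction; hence $p = 3$ and
\[
\frac 1q = \frac 12 + \frac 1n - \frac 13 = \frac{n+6}{6n},
\qquad\text{i.e.}\qquad
q = \frac{6n}{n+6}.
\]
The constraint $3 < q < 6$ then leaves only $(q,n) = (4,12)$ and $(5,30)$, and in both cases $n$ is even.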
\begin{remark} Theorem~\ref{thm:SmoothingX} generalizes to the construction of Dolgachev surfaces of type $(m,n)$ for any coprime integers $n>m>0$. Indeed, we shall describe the multiple fiber of multiplicity $n$ associated to the Weil divisor $\pi_* E_{r+1}$. The precise meaning of this sentence will be explained in the next section\,(see Example~\ref{eg:MultipleFibers}). If we perform more blow ups to the $C_1\cup E_1$-fiber so that $X$ contains a $T_1$-singularity of type $\bigl( 0 \in \A^2 \big / \frac{1}{m^2}(1,mb-1)\bigr)$, then the surface $X^\gen$ has two multiple fibers of multiplicities $m$ and $n$. Thus, $X^\gen$ is a Dolgachev surface of type $(m,n)$. \end{remark} \section{Exceptional vector bundles on Dolgachev surfaces}\label{sec:ExcepBundleOnX^g} In general, it is hard to understand how information on the central fiber is carried to the general fiber along the $\Q$-Gorenstein smoothing. Looking at the topology near the singularities of $X$, one can get a clue about how to relate information between $X$ and $X^\gen$. This section essentially follows the idea of Hacking. Some ingredients of Hacking's method, which are necessary for our application, are included in the appendix\,(Section~\ref{sec:Appendix}). Readers who wish to look up the details are referred to Hacking's original paper\,\cite{Hacking:ExceptionalVectorBundle}. \subsection{Topology of the singularities of $X$}\label{subsec:TopologyofX} Let $L_i \subseteq X$\,($i=1,2$) be the link of the singularity $P_i$. Then, $H_1(L_1,\Z) \simeq \Z/4\Z$ and $H_1(L_2,\Z) \simeq \Z/n^2\Z$\,({\it cf.} \cite[Proposition~13]{Manetti:NormalDegenerationOfPlane}). Since $\op{gcd}(2,n)=1$, $H_1(L_1,\Z) \oplus H_1(L_2,\Z) \simeq \Z/ 4n^2 \Z$ is a finite cyclic group. By \cite[p.~1191]{Hacking:ExceptionalVectorBundle}, $H_2(X,\Z) \to H_1(L_i,\Z)$ is surjective for each $i=1,2$, thus the natural map \[ H_2(X,\Z) \to H_1(L_1,\Z) \oplus H_1(L_2,\Z),\quad \alpha \mapsto ( \alpha \cap L_1 ,\, \alpha \cap L_2) \] is surjective. We have further information on the groups $H_1(L_i,\Z)$. \begin{theorem}[{\cite{Mumford:TopologyOfNormalSurfaceSingularity}}]\label{thm:MumfordTopologyOfLink} Let $X$ be a projective normal surface containing a $T_1$-singularity $P \in X$. Let $f \colon \widetilde X \to X$ be a good resolution ({\it i.e.} the exceptional divisor is simple normal crossing) of the singularity $P$, and let $E_1,\ldots,E_r$ be the integral exceptional divisors ordered in such a way that $(E_i . E_{i+1}) = 1$ for each $i=1,\ldots,r-1$. Let $\widetilde L\subseteq \widetilde X$ be the plumbing fixture (see Figure~\ref{fig:PlumbingFixture}) around $\bigcup E_i$, and let $\alpha_i \subset \widetilde L$ be the loop around $E_i$, suitably oriented. Then the following statements are true. \begin{enumerate} \item The group $H_1(\widetilde L,\Z)$ is generated by the loops $\alpha_i$. The relations are \[ \sum_j (E_i . E_j) \alpha_j = 0,\quad i=1,\,\ldots,\,r. \] \item Let $L \subset X$ be the link of the singularity $P \in X$. Then, $\widetilde L$ is homeomorphic to $L$. \end{enumerate} \begin{figure} \caption{Plumbing fixture around $\bigcup E_i$.} \label{fig:PlumbingFixture} \end{figure} \end{theorem} Proposition~\ref{prop:Manetti_PicLemma} provides a way to associate a Cartier divisor on $X$ with a Cartier divisor on $X^\gen$. This association can be extended as the following proposition illustrates.
\begin{proposition}[{{\it cf.} \cite[Lemma~5.5]{Hacking:ExceptionalVectorBundle}}]\label{prop:Hacking_Specialization} Let $X$ be a projective normal surface, and let $(P \in X)$ be a $T_1$-singularity of type $\bigl( 0 \in \A^2 \big/ \frac{1}{n^2}(1,na-1)\bigr)$. Suppose $X$ admits a $\Q$-Gorenstein deformation $\mathcal X/(0 \in T)$ over a smooth curve germ $(0 \in T)$ such that $\mathcal X / (0 \in T)$ is a smoothing of $(P \in X)$, and is locally trivial outside $(P \in X)$. Let $X^\gen$ be the general fiber of $\mathcal X \to (0 \in T)$, and let $\mathcal B \subset \mathcal X$ be a sufficiently small open ball around $P \in \mathcal X$. Then the link $L$ and the Milnor fiber $M$ of $(P \in X)$ are given as follows: \[ L = \partial \mathcal B \cap X^\gen,\qquad M = \mathcal B \cap X^\gen. \] In addition, let $B := \mathcal B \cap X$, which is a contractible space. Then, the relative homology sequence for the pair $(X^\gen, M)$ yields the exact sequence \[ 0 \to H_2(X^\gen,\Z) \to H_2(X,\Z) \to H_1(M,\Z). \] Furthermore, a class in $H_2(X,\Z)$ lifts to a class in $H_2(X^\gen,\Z)$ if and only if its image under the map $H_2(X,\Z) \to H_1(L,\Z)$ is divisible by $n$. \end{proposition} \begin{proof} We have a sequence of isomorphisms \[ H_2(X^\gen,M) \simeq H_2(X^\gen \setminus M , \partial M) \simeq H_2(X\setminus B , \partial B) \simeq H_2(X,B) \simeq H_2(X), \] where the first and the third isomorphisms are excisions, the second is due to the topological description $X^\gen = (X \setminus B) \cup M$\,(\cite[p.~39]{Manetti:ModuliOfDiffeo}), and the last is due to the contractibility of $B$. The relative homology sequence for the pair $(X^\gen, M)$ gives \[ 0 \to H_2(X^\gen) \to H_2(X^\gen,M) \simeq H_2(X) \to H_1(M). \] The map on the right is the composition $H_2(X) \to H_1(L) \to H_1(M)$, where $H_1(L) \to H_1(M)$ is the natural surjection $\Z/n^2 \Z \to \Z/n\Z$\,(\cite[Lemma~2.1]{Hacking:ExceptionalVectorBundle}). The last assertion follows immediately. \end{proof}
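As an illustration of Theorem~\ref{thm:MumfordTopologyOfLink} and of the divisibility criterion above, consider the chain $C_2 \cup E_2$ with $C_2^2 = -5$ and $E_2^2 = -2$ (the case $n = 3$, $a = 1$). The presentation of the first homology of the link $L_2$ reads
\[
H_1(L_2,\Z) = \bigl\langle \alpha_{C_2},\,\alpha_{E_2} \ :\ -5\alpha_{C_2} + \alpha_{E_2} = 0,\ \ \alpha_{C_2} - 2\alpha_{E_2} = 0 \bigr\rangle,
\]
so $\alpha_{E_2} = 5\alpha_{C_2}$ and $9\alpha_{C_2} = 0$, i.e. $H_1(L_2,\Z) \simeq \Z/9\Z$ with generator $\alpha_{C_2}$. As far as the singularity $P_2$ is concerned, a class in $H_2(X,\Z)$ lifts to $H_2(X^\gen,\Z)$ exactly when its image in $\Z/9\Z$ is divisible by $3$.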
Recall that $Y$ is the rational elliptic surface constructed in Section~\ref{sec:Construction}, and $\pi \colon Y \to X$ is the contraction of $C_1,\,C_2,\,E_2,\,\ldots,\,E_r$. Proposition~\ref{prop:Hacking_Specialization} gives the exact sequence \begin{equation} 0 \to H_2(X^\gen,\Z) \to H_2(X,\Z) \to H_1(M_1,\Z) \oplus H_1(M_2,\Z), \label{eq:SpecializationSequence} \end{equation} where $M_1$\,(resp. $M_2$) is the Milnor fiber of the smoothing of $(P_1 \in X)$\,(resp. $(P_2 \in X)$). In this case the map $H_2(X,\Z) \to H_1(L_1,\Z)\oplus H_1(L_2,\Z)$ is described as follows. If $D \in \Pic Y$, then $[\pi_*D] \in H_2(X,\Z)$ maps to \[ \bigl( (D.C_1) \alpha_{C_1},\ (D.C_2) \alpha_{C_2} + (D.E_2) \alpha_{E_2} + \ldots + (D.E_r) \alpha_{E_r}\bigr). \] Suppose $D \in \Pic Y$ is a divisor such that $(D. C_1) \in 2\Z$, $(D. C_2) \in n\Z$, and $(D.E_2) = \ldots = (D.E_r) = 0$. Then, Theorem~\ref{thm:MumfordTopologyOfLink} and (\ref{eq:SpecializationSequence}) imply that the cycle $[\pi_* D] \in H_2(X,\Z)$ maps to the zero element of $H_1(M_1,\Z) \oplus H_1(M_2,\Z)$. In particular, there is a class in $H_2(X^\gen,\Z)$ which maps to $[\pi_* D]$. Since $X^\gen$ is a nonsingular surface with $p_g = q = 0$, the first Chern class map and Poincar\'e duality induce the isomorphisms $\Pic X^\gen \simeq H^2(X^\gen,\Z) \simeq H_2(X^\gen,\Z)$\,(\cite[Proposition~4.11]{Kollar:Seifert}). We take the divisor $D^\gen \in \Pic X^\gen$ corresponding to $[\pi_* D] \in H_2(X,\Z)$. A more detailed description of $D^\gen$ will be given in \textsection\ref{subsec:HackingConstr}. We remark that even if $\pi_*D$ is an effective divisor, the resulting divisor $D^\gen$ need not be effective. \begin{example}\label{eg:MultipleFibers} If $D = E_{r+1}$, then $[\pi_*E_{r+1}] \in H_2(X,\Z)$ maps to $(0, \alpha_{E_r} + \alpha_{E_s} ) \in H_1(L_1,\Z) \oplus H_1(L_2,\Z)$, where $E_s$ is the other end component of the chain $\{C_2,E_2,\ldots,E_r\}$. It can be easily shown that either $\alpha_{E_s} = (na-1) \alpha_{E_r}$ or $\alpha_{E_r} = (na-1) \alpha_{E_s}$. In both cases, $\alpha_{E_r} + \alpha_{E_s}$ maps to zero under $H_1(L_2,\Z) \to H_1(M_2,\Z)$. It follows that $[\pi_*E_{r+1}] \in H_2(X,\Z)$ admits a preimage $E_{r+1}^\gen$ in $H_2(X^\gen,\Z) \simeq \Pic X^\gen$. By (\ref{eq:EquationOnFiber}) and Proposition~\ref{prop:SingIndexAndFiberCoefficients}, there are integers $a_2,\ldots,a_r \in \Z$ such that \[ C_0 = C_2 + a_2 E_2 + \ldots + a_r E_r + n E_{r+1}. \] This leads to $\pi_* C_0 = \pi_* ( C_2 + a_2 E_2 + \ldots + a_r E_r + n E_{r+1}) = \pi_* ( n E_{r+1})$. Since $\mathcal X/(0 \in T)$ is a $\Q$-Gorenstein deformation, $\pi_*( (n-2) E_{r+1} ) \equiv \frac{n-2}{n}\pi_* C_0 \equiv 2K_X$\,(Proposition~\ref{prop:SingularSurfaceX}) induces $(n-2)E_{r+1}^\gen = 2K_{X^\gen}$. The same argument shows that there exists $E_1^\gen \in \Pic X^\gen$ with $(n-2)E_1^\gen = nK_{X^\gen}$. In particular, we find that both $2E_1^\gen$ and $nE_{r+1}^\gen$ are $\Q$-linearly equivalent to $\frac{2n}{n-2}K_{X^\gen}$, which is again $\Q$-linearly equivalent to the general fiber $C_0^\gen$. \end{example} The next proposition explains how to find a preimage under the surjective map $H_2(X,\Z) \to H_1(L_1,\Z)\oplus H_1(L_2,\Z)$. \begin{proposition}\label{prop:DesiredDivisorOnY} Let $X$ be a projective normal surface with a cyclic quotient singularity $(P \in X)$, let $\pi \colon Y \to X$ be a resolution of $P \in X$, and let $E_1,\ldots,E_r \subset Y$ be the exceptional divisors over $P$.
The first homology group of the link $L$ has the following presentation: \[ \bigl\langle \alpha_1,\ldots,\alpha_r : \sum_{j=1}^r (E_i . E_j) \alpha_j = 0,\ i=1,\ldots,r \bigr\rangle. \] Let $D$ be a divisor on $Y$, and let $\ell_1,\ldots,\ell_r$ be integers satisfying \[ [\pi_* D] \cap L = \ell_1 \alpha_1 + \ldots + \ell_r \alpha_r \] in $H_1(L)$. Then there are integers $e_1,\ldots,e_r$ such that $D' := D + \sum_{j=1}^r e_jE_j $ satisfies $(D'.E_i) = \ell_i$ for each $i=1,\ldots,r$. \end{proposition} \begin{proof} Consider the free abelian group $\bigoplus_{j=1}^r \Z \cdot \tilde{\alpha}_j$ and the homomorphism \[ \bigoplus_{j=1}^r \Z \cdot \tilde\alpha_j \to H_1(L),\quad \tilde\alpha_j \mapsto \alpha_j. \] This map is clearly surjective. By Theorem~\ref{thm:MumfordTopologyOfLink}, the kernel is the abelian group generated by \[ \Bigl\{ R_i := \sum_{j=1}^r(E_i . E_j) \tilde\alpha_j : i=1,\ldots,r\Bigr\}. \] Since $[\pi_*D] \cap L = \sum_{j=1}^r ( D. E_j) \alpha_j$, the equality $[\pi_*D] \cap L = \sum_{j=1}^r \ell_j \alpha_j$ implies that there are integers $e_1,\ldots,e_r$ such that $\sum_{j=1}^r \ell_j \tilde\alpha_j - \sum_{j=1}^r ( D.E_j) \tilde \alpha_j = e_1 R_1 + \ldots + e_r R_r$. This leads to \begin{align*} \sum_{j=1}^r \ell_j \tilde \alpha_j &= \sum_{j=1}^r(D.E_j) \tilde \alpha_j + \sum_{i=1}^r \sum_{j=1}^r (e_i E_i . E_j) \tilde \alpha_j \\ &= \sum_{j=1}^r ( D + e_1 E_1 + \ldots + e_r E_r \mathbin. E_j) \tilde\alpha_j. \end{align*} Taking $D' = D + e_1 E_1 + \ldots + e_r E_r$, we get $(D'.E_i)=\ell_i$ for each $i=1,\ldots,r$. \end{proof}
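To see the proposition at work, consider again the chain $C_2 \cup E_2$ with $C_2^2 = -5$, $E_2^2 = -2$, and suppose (purely for illustration) that $D$ satisfies $(D.C_2) = 28$ and $(D.E_2) = 0$; since $28\alpha_{C_2} = \alpha_{C_2}$ in $\Z/9\Z$, we may take $(\ell_1,\ell_2) = (1,0)$. The integers $e_1,e_2$ must then satisfy
\[
-5e_1 + e_2 = 1 - 28 = -27,
\qquad
e_1 - 2e_2 = 0,
\]
whose solution is $(e_1,e_2) = (6,3)$. Indeed, $D' = D + 6C_2 + 3E_2$ satisfies $(D'.C_2) = 28 - 30 + 3 = 1$ and $(D'.E_2) = 6 - 6 = 0$.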
\subsection{Exceptional vector bundles on $X^\gen$}\label{subsec:HackingConstr} We keep the notations in Section~\ref{sec:Construction}, namely, $Y$ is the rational elliptic surface\,(Figure~\ref{fig:Configuration_General}), $\pi \colon Y \to X$ is the contraction in Proposition~\ref{prop:SingularSurfaceX}. Let $(0 \in T)$ be the base space of the versal deformation ${\mathcal X^{\rm ver} / (0 \in T)}$ of $X$, and let $(0 \in T_i)$ be the base space of the versal deformation $(P_i \in \mathcal X^{\rm ver}) / (0 \in T_i)$ of the singularity $(P_i \in X)$. By Lemma~\ref{lem:NoObstruction} and \cite[Lemma~7.2]{Hacking:ExceptionalVectorBundle}, there exists a smooth morphism \[ \mathfrak T \colon (0 \in T) \to \textstyle\prod_i (0 \in T_i). \] For each $i=1,2$, take the base extensions $( 0 \in T_i') \to (0 \in T_i)$ to which Proposition~\ref{prop:HackingWtdBlup} can be applied. Then, there exists a Cartesian diagram \[ \begin{xy} (0,0)*+{(0 \in T)}="00"; (30,0)*+{\textstyle \prod_i ( 0 \in T_i)}="10"; (0,15)*+{(0 \in T')}="01"; "10"+"01"-"00"*+{\textstyle \prod_i ( 0 \in T_i')}="11"; {\ar^(0.44){\mathfrak T} "00";"10"}; {\ar^(0.44){\mathfrak T'} "01";"11"}; {\ar "01";"00"}; {\ar "11";"10"}; \end{xy}. \] Let $\mathcal X' / (0 \in T')$ be the deformation obtained by pulling back $\mathcal X^{\rm ver} / ( 0 \in T)$ along $(0 \in T') \to (0 \in T)$. By Proposition~\ref{prop:HackingWtdBlup}, there exists a proper birational map $\Phi \colon \tilde{\mathcal X} \to \mathcal X'$ such that the central fiber $\tilde {\mathcal X}_0 = \Phi^{-1}(\mathcal X_0')$ is the union of three irreducible components $\tilde X_0$, $W_1$, $W_2$, where $\tilde X_0$ is the proper transform of $X = \mathcal X_0'$, and $W_1$\,(resp. $W_2$) is the exceptional locus over $P_1$\,(resp. $P_2$). The intersection $Z_i := \tilde X_0 \cap W_i$\,($i=1,2$) is a smooth rational curve. From now on, assume $a=1$. This is the case in which the resolution graph of the singular point $P_2 \in X$ forms the chain $C_2,\,E_2,\,\ldots,\,E_r$ in this order. Indeed, the resolution graph of a cyclic quotient singularity $\bigl(0 \in \A^2 / \frac{1}{n^2}(1,n-1)\bigr)$ is \!\!\!\!\! \raisebox{-11pt}[0pt][13pt]{ \begin{tikzpicture} \draw(0,0) node[anchor=center] (E1) {}; \draw(30pt,0pt) node[anchor=center] (E2) {}; \draw(60pt,0pt) node[anchor=center, inner sep=10pt] (E3) {}; \draw(90pt,0pt) node[anchor=center] (E4) {}; \fill[black] (E1) circle (1.5pt); \fill[black] (E2) circle (1.5pt); \draw (E3) node[anchor=center]{$\cdots$}; \fill[black] (E4) circle (1.5pt); \node[black,below,shift=(90:2pt)] at (E1.south) {$\scriptscriptstyle -(n+2)\ $}; \node[black,below,shift=(90:2pt)] at (E2.south) {$\scriptscriptstyle -2$}; \node[black,below,shift=(90:2pt)] at (E4.south) {$\scriptscriptstyle -2$}; \draw[-] (E1.east) -- (E2.west); \draw[-] (E2.east) -- (E3.west); \draw[-] (E3.east) -- (E4.west); \end{tikzpicture} }\!\!($(n-1)$ vertices). Let $\iota \colon Y \to \tilde X_0$ be the contraction of $E_2,\ldots,E_r$\,(see Proposition~\ref{prop:HackingWtdBlup}\ref{item:prop:HackingWtdBlup}). As noted in Remark~\ref{rmk:SimplestSingularCase}, $W_1$ is isomorphic to $\P^2$, $Z_1$ is a smooth conic in $W_1$, hence $\mathcal O_{W_1}(1)\big\vert_{Z_1} = \mathcal O_{Z_1}(2)$. Also, \begin{equation} W_2 \simeq \P_{x,y,z}(1,n-1,1),\quad Z_2 = (xy=z^n) \subset W_2,\quad \text{and}\quad \mathcal O_{W_2}(n-1)\big\vert_{Z_2} = \mathcal O_{Z_2}(n). 
\label{eq:SecondWtdBlowupExceptional} \end{equation} The last equality can be verified as follows: let $h_{W_2} = c_1(\mathcal O_{W_2}(1))$, then $(n-1)h_{W_2}^2 = 1$, so $\bigl( c_1(\mathcal O_{W_2}(n-1)) \mathbin . Z_2\bigr) = \bigl( (n-1)h_{W_2} \mathbin. nh_{W_2} \bigr) = n$. In what follows, we construct exceptional vector bundles on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$ by gluing suitable vector bundles on each irreducible component which have isomorphic restrictions to the intersection curves $Z_i$. \begin{proposition}\label{prop:VectorBundleOnReducibleSurface} Let $D \in \Pic Y$ be a divisor such that $(D.E_i) = 0$ for $i=2,\ldots,r$. \begin{enumerate}[label=(\alph{enumi})] \item Assume that $(D.C_1) =2d_1 \in 2\Z$, $(D.C_2) = nd_2\in n\Z$. Then, there exists a line bundle $\tilde {\mathcal D}$ on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$ satisfying \[ \tilde{\mathcal D}\big\vert_{\tilde X_0} \simeq \mathcal O_{\tilde X_0}(\iota_* D),\quad \tilde{\mathcal D}\big\vert_{W_1} \simeq \mathcal O_{W_1}(d_1),\quad\text{and}\quad \tilde{\mathcal D}\big\vert_{W_2} \simeq \mathcal O_{W_2}((n-1)d_2). \] \item Assume that $(D.C_1) = 1$, $(D.C_2) = 0$, and that there exists an exceptional vector bundle $G_1$ of rank $2$ on $W_1$ such that $G_1\big\vert_{Z_1} \simeq \mathcal O_{Z_1}(1)^{\oplus 2}$. Then, there exists a vector bundle $\tilde{\mathcal V}_1$ on $\tilde{\mathcal X}_0$ satisfying \[ \tilde{\mathcal V}_1 \big\vert_{\tilde X_0} \simeq \mathcal O_{\tilde X_0}(\iota_* D)^{\oplus 2},\quad \tilde{\mathcal V}_1 \big\vert_{W_1} \simeq G_1,\quad\text{and}\quad \tilde{\mathcal V}_1 \big\vert_{W_2} \simeq \mathcal O_{W_2}^{\oplus 2}. \] \item Assume that $(D.C_1)=0$, $(D.C_2) = 1$, and that there exists an exceptional vector bundle $G_2$ of rank $n$ on $W_2$ such that $G_2 \big\vert_{Z_2} \simeq \mathcal O_{Z_2}(1)^{\oplus n}$. Then, there exists a vector bundle $\tilde{\mathcal V}_2$ on $\tilde{\mathcal X}_0$ satisfying \[ \tilde{\mathcal V}_2 \big\vert_{\tilde X_0} \simeq \mathcal O_{\tilde X_0}(\iota_* D)^{\oplus n},\quad \tilde{\mathcal V}_2 \big\vert_{W_1} \simeq \mathcal O_{W_1}^{\oplus n},\quad\text{and}\quad \tilde{\mathcal V}_2 \big\vert_{W_2} \simeq G_2. \] \end{enumerate} Furthermore, all the bundles introduced above are exceptional. \end{proposition} \begin{proof} In all three cases, the ``ingredient bundles'' on the irreducible components have isomorphic restrictions to $Z_i$, hence $\tilde{\mathcal E}$\,($= \tilde{\mathcal D},\,\tilde{\mathcal V}_1,\,\tilde{\mathcal V}_2$) exists as a vector bundle fitting into the exact sequence\,(\textit{cf.}~\cite[Lemma~7.3]{Hacking:ExceptionalVectorBundle}) \begin{equation} 0 \to \tilde{\mathcal E} \to \tilde{\mathcal E}\big\vert_{\tilde X_0} \oplus \tilde{\mathcal E}\big\vert_{W_1} \oplus \tilde{\mathcal E}\big\vert_{W_2} \to \tilde{\mathcal E}\big\vert_{Z_1} \oplus \tilde{\mathcal E} \big\vert_{Z_2} \to 0.\label{eq:ExactSeq_onReducibleSurface} \end{equation} Conversely, given any vector bundle on $\tilde{\mathcal X}_0$, one can consider the exact sequence of the form (\ref{eq:ExactSeq_onReducibleSurface}). We plug the corresponding endomorphism sheaf into the sequence (\ref{eq:ExactSeq_onReducibleSurface}) to verify that $\tilde{\mathcal E}$ is exceptional.
Replacing $\tilde{\mathcal E}$ by $\varEnd(\tilde{\mathcal D}) \simeq \tilde{\mathcal D}^\vee \otimes \tilde{\mathcal D} \simeq \mathcal O_{\tilde{\mathcal X}_0}$, we rewrite (\ref{eq:ExactSeq_onReducibleSurface}) as \[ 0 \to \mathcal O_{\tilde{\mathcal X}_0} \to \mathcal O_{\tilde X_0} \oplus \mathcal O_{W_1} \oplus \mathcal O_{W_2} \to \mathcal O_{Z_1} \oplus \mathcal O_{Z_2} \to 0. \] Taking cohomology, we easily verify that $h^p(\mathcal O_{\tilde{\mathcal X}_0}) = h^p( \mathcal O_{\tilde X_0})$. Using the same argument as in Proposition~\ref{prop:SingularSurfaceX}(a), we find \[ H^p(\mathcal O_{\tilde X_0}) = \left\{ \begin{array}{ll} \C & p=0 \\ 0 & p\neq 0. \end{array} \right. \] Since $\tilde{\mathcal D}$ is locally free, $\varExt^q(\tilde{\mathcal D},\tilde{\mathcal D})=0$ for $q \neq 0$. By the local-to-global spectral sequence \[ E_2^{p,q} = H^p( \varExt^q(\tilde{\mathcal D},\tilde{\mathcal D})) \Rightarrow H^{p+q}( \End(\tilde{\mathcal D})), \] $h^p(\End(\tilde{\mathcal D})) = \dim_\C E_2^{p,0} = h^p(\mathcal O_{\tilde{\mathcal X}_0})$, showing that $\tilde{\mathcal D}$ is an exceptional line bundle. Now, we consider (\ref{eq:ExactSeq_onReducibleSurface}) for $\tilde{\mathcal E} = \varEnd(\tilde{\mathcal V}_1)$, which reads \[ 0 \to \varEnd(\tilde{\mathcal V}_1) \to \mathcal O_{\tilde X_0}^{\oplus 4} \oplus \varEnd(G_1) \oplus \mathcal O_{W_2}^{\oplus 4} \to \mathcal O_{Z_1}^{\oplus 4} \oplus \mathcal O_{Z_2}^{\oplus 4} \to 0. \] Since the restrictions $H^0(\mathcal O_{\tilde X_0}) \to H^0(\mathcal O_{Z_1})$, $H^0(\mathcal O_{W_2}) \to H^0(\mathcal O_{Z_2})$ are surjective, we have $h^p(\varEnd(\tilde{\mathcal V}_1)) = h^p(\varEnd(G_1))$. Using the local-to-global spectral sequences for the sheaves $\varEnd(\tilde{\mathcal V}_1)$ and $\varEnd(G_1)$, and proceeding as in (a), we conclude that $h^p(\End(\tilde{\mathcal V}_1)) = h^p(\varEnd(\tilde{\mathcal V}_1)) = h^p(\varEnd(G_1)) = h^p(\End(G_1))$, thus $\tilde{\mathcal V}_1$ is an exceptional vector bundle. Similarly, one can prove that $\tilde{\mathcal V}_2$ is an exceptional vector bundle. \qedhere \end{proof}
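Let us record explicitly the Euler characteristic identity encoded in the sequence (\ref{eq:ExactSeq_onReducibleSurface}), as it will be used repeatedly below: for any vector bundle $\tilde{\mathcal E}$ on $\tilde{\mathcal X}_0$,
\[
\chi(\tilde{\mathcal E}) = \chi\bigl(\tilde{\mathcal E}\big\vert_{\tilde X_0}\bigr) + \chi\bigl(\tilde{\mathcal E}\big\vert_{W_1}\bigr) + \chi\bigl(\tilde{\mathcal E}\big\vert_{W_2}\bigr) - \chi\bigl(\tilde{\mathcal E}\big\vert_{Z_1}\bigr) - \chi\bigl(\tilde{\mathcal E}\big\vert_{Z_2}\bigr),
\]
since the alternating sum of the Euler characteristics in a short exact sequence of sheaves vanishes.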
We use Proposition~\ref{prop:DesiredDivisorOnY} to find the divisors satisfying the conditions described in Proposition~\ref{prop:VectorBundleOnReducibleSurface}(b) and (c). \begin{lemma}\label{lem: Divisor for Higher ranks} Let $N_1,N_2$ be solutions of the systems of congruence equations \[ N_1 \equiv \left\{ \begin{array}{ll} 1 \mod 4 \\ 0 \mod n^2 \end{array} \right. \qquad N_2 \equiv \left\{ \begin{array}{ll} 0 \mod 4 \\ 1 \mod n^2\,. \end{array} \right. \] Then, \begin{enumerate} \item there are integers $e,e_1,\ldots,e_r \in \Z$ such that $V_1 := N_1 F_1 + e C_1 + e_1 C_2 + e_2 E_2 + \ldots + e_r E_r$ satisfies $(V_1.C_1)=1$ and $(V_1.C_2)=(V_1.E_2)=\ldots=(V_1.E_r)=0$; \item there are integers $f,f_1,\ldots,f_r \in \Z$ such that $V_2 := N_2 F_1 + f C_1 + f_1 C_2 + f_2 E_2 + \ldots + f_r E_r$ satisfies $(V_2.C_2)=1$ and $(V_2.C_1)=(V_2.E_2)=\ldots=(V_2.E_r)=0$. \end{enumerate} \end{lemma} \begin{proof} By the choices of $N_1,N_2$, we have \[ \bigl( [\pi_* (N_i F_1)] \cap L_1 ,\, [\pi_*(N_i F_1)] \cap L_2 \bigr) = \left\{ \begin{array}{ll} ( \alpha_{C_1},\, 0 ) & i=1 \\ ( 0,\, \alpha_{C_2}) & i=2 \end{array} \right. \] in $H_1(L_1) \oplus H_1(L_2)$. Applying Proposition~\ref{prop:DesiredDivisorOnY}, we get the desired result. \end{proof} Referring to Proposition~\ref{prop:VectorBundleOnReducibleSurface} and Lemma~\ref{lem: Divisor for Higher ranks}, we can assemble several exceptional vector bundles on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$\,(Table~\ref{table:ExcBundles_OnSingular}). \begin{center} \begin{tabular}{c|c|c|c} \raisebox{-5pt}[11pt][6pt]{}$\tilde{\mathcal X}_0$ & $\tilde X_0$ & $W_1$ & $W_2$ \\ \hline \raisebox{-5pt}[11pt][7pt]{}$\mathcal O_{\tilde{\mathcal X}_0}$ & $\mathcal O_{\tilde X_0}$ & $\mathcal O_{W_1}$ & $\mathcal O_{W_2}$ \\ \raisebox{-5pt}[11pt][7pt]{}$\tilde{\mathcal F}_{ij}\,{\scriptstyle (1 \leq i\neq j \leq 9)}$ & $\mathcal O_{\tilde X_0}(\iota_*(F_i - F_j))$ & $\mathcal O_{W_1}$ & $\mathcal O_{W_2}$ \\ \raisebox{-5pt}[11pt][7pt]{}$\tilde{\mathcal C}_0$ & $\mathcal O_{\tilde X_0}(\iota_*C_0)$ & $\mathcal O_{W_1}$ & $\mathcal O_{W_2}$ \\ \raisebox{-1pt}[11pt][7pt]{$\tilde{\mathcal K}$} & $\mathcal O_{\tilde X_0}(\iota_* K_Y)$ & $\mathcal O_{W_1}(1)$ & $\mathcal O_{W_2}(n-1)$ \\ \raisebox{-1pt}[11pt][7pt]{$\tilde{\mathcal V}_1$} & $\mathcal O_{\tilde X_0}(\iota_*V_1)^{\oplus 2}$ & $\mathcal T_{W_1}(-1)$ & $\mathcal O_{W_2}^{\oplus 2}$ \\ \raisebox{-1pt}[11pt][7pt]{$\tilde{\mathcal V}_2$} & $\mathcal O_{\tilde X_0}(\iota_*V_2)^{\oplus n}$ & $\mathcal O_{W_1}^{\oplus n}$ & $\mathcal G_2$ \end{tabular}\vskip-0.33\baselineskip \nopagebreak\captionof{table}{Examples of exceptional vector bundles constructed using Proposition~\ref{prop:VectorBundleOnReducibleSurface}}\label{table:ExcBundles_OnSingular} \end{center} Standard arguments in deformation theory\,({\it cf.}~\cite[p.~1181]{Hacking:ExceptionalVectorBundle}) show that if an exceptional vector bundle $\tilde{\mathcal D}$ is given in the central fiber of the family $\tilde{\mathcal X}/(0 \in T)$, then it deforms uniquely in a small neighborhood of the family, \textit{i.e.} there exists a vector bundle $\mathscr D$ on $\tilde{\mathcal X}$\,(shrinking $T$ if necessary) such that $\mathscr D \big\vert_{\tilde{ \mathcal X}_0} = \tilde{\mathcal D}$.
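For instance, in the case $n = 3$ one may take $N_1 = 9$ and $N_2 = 28$; these particular values are merely convenient solutions of the congruences. Assuming, as in the configuration of Section~\ref{sec:Construction}, that $F_1$ meets each of $C_1$ and $C_2$ transversally in a single point, the cycle $[\pi_*(N_2F_1)]$ has image
\[
\bigl( 28\,\alpha_{C_1},\ 28\,\alpha_{C_2} \bigr) = \bigl( 0,\ \alpha_{C_2} \bigr) \in H_1(L_1,\Z) \oplus H_1(L_2,\Z) \simeq \Z/4\Z \oplus \Z/9\Z,
\]
since $28 \equiv 0 \pmod 4$ and $28 \equiv 1 \pmod 9$; Proposition~\ref{prop:DesiredDivisorOnY} then produces the divisor $V_2$.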
\begin{proposition}\label{prop:Hacking vs Topological} Let $\tilde {\mathcal D}$ be the exceptional line bundle on the reducible surface $\tilde{\mathcal X}_0$ obtained in Proposition~\ref{prop:VectorBundleOnReducibleSurface}. Let $\mathscr D$ be a line bundle on $\tilde{\mathcal X}$ such that $\mathscr D \big\vert_{\tilde{\mathcal X}_0} = \tilde{\mathcal D}$. Then, $\mathscr D\big\vert_{X^\gen} = \mathcal O_{X^\gen}(D^\gen)$ where $D^\gen$ is the divisor introduced in \textsection\ref{subsec:TopologyofX}. \end{proposition} \begin{proof} Let $\mathcal B \subset \mathcal X$ be the disjoint union of two small balls around $P_i \in \mathcal X$, and let $\tilde{\mathcal B} = \Phi^{-1}\mathcal B$. Using the argument in \cite[p.~1192]{Hacking:ExceptionalVectorBundle}, we observe that the class $c_1\bigl(\mathscr D\big\vert_{\tilde{\mathcal X}_t \setminus \tilde{\mathcal B}_t}\bigr) \in H^2(\tilde{\mathcal X}_t \setminus \tilde{\mathcal B}_t)$ is independent of $t$ when we identify the groups $\{H^2(\tilde{\mathcal X}_t \setminus \tilde{\mathcal B}_t)\}_t$ in the natural way. For $t=0$, Poincar\'e duality on manifolds with boundary gives a sequence of isomorphisms \begin{equation} H^2(\tilde{\mathcal X}_0 \setminus \tilde{\mathcal B}_0) \simeq H_2(X \setminus B, \partial B) \simeq H_2(X,B) \simeq H_2(X), \label{eq: Specialization as Poincare dual} \end{equation} which carry $c_1(\mathscr D\big\vert_{\tilde{\mathcal X}_0 \setminus \tilde{\mathcal B}_0})$ to $[\pi_*D] \in H_2(X)$. As topological cycles, both $c_1(D^\gen\big\vert_{\tilde{\mathcal X}_t \setminus \tilde{\mathcal B}_t})$ and $c_1\bigl(\mathscr D\big\vert_{\tilde{\mathcal X}_t \setminus \tilde{\mathcal B}_t}\bigr)$ are obtained from $[\pi_*D\big\vert_{X \setminus B}]$ by trivial extension, hence they coincide. The injective map $H_2(X^\gen) \to H_2(X)$ defined in Proposition~\ref{prop:Hacking_Specialization} is nothing but the natural restriction $H^2(X^\gen) \to H^2(X^\gen \setminus M)$, where the source and the target are identified via Poincar\'e duality on manifolds with boundary. Thus, $H^2(X^\gen) \to H^2(X^\gen \setminus M)$ is injective, so $c_1(D^\gen) = c_1(\mathscr D\big\vert_{X^\gen})$. The first Chern class map $c_1 \colon \Pic X^\gen \to H^2(X^\gen,\Z)$ is an isomorphism, hence $\mathcal O_{X^\gen}(D^\gen) = \mathscr D\big\vert_{X^\gen}$. \end{proof} We finish this section by presenting an exceptional collection of length $9$ on the Dolgachev surface $X^\gen$. Note that this collection cannot generate the whole category $\D^{\rm b}(X^\gen)$. \begin{proposition}\label{prop:ExceptCollection_ofLengthNine} Let $F_{1j}^\gen\, (j>1)$ be the divisor on $X^\gen$, which arises from the deformation of $\tilde {\mathcal F}_{1j}$ along $\tilde{\mathcal X} / (0 \in T')$. Then the ordered tuple \[ \bigl\langle \mathcal O_{X^\gen},\, \mathcal O_{X^\gen}(F_{12}^\gen),\,\ldots,\, \mathcal O_{X^\gen}(F_{19}^\gen) \bigr\rangle \] forms an exceptional collection in the derived category $\D^{\rm b}(X^\gen)$. \end{proposition} \begin{proof} By virtue of upper-semicontinuity, it suffices to prove that $H^p(\tilde{\mathcal X}_0, \tilde{\mathcal F}_{1i} \otimes \tilde{\mathcal F}_{1j}^\vee) = 0$ for $1 \leq i < j \leq 9$ and $p \geq 0$.
The sequence (\ref{eq:ExactSeq_onReducibleSurface}) for $\tilde{\mathcal E} = \tilde{\mathcal F}_{1i} \otimes \tilde{\mathcal F}_{1j}^\vee$ reads \[ 0 \to \tilde{\mathcal F}_{1i} \otimes \tilde{\mathcal F}_{1j}^\vee \to \mathcal O_{\tilde X_0}(\iota_* ( F_j - F_i)) \oplus \mathcal O_{W_1} \oplus \mathcal O_{W_2} \to \mathcal O_{Z_1} \oplus \mathcal O_{Z_2} \to 0. \] Since $H^0(\mathcal O_{W_k}) \simeq H^0(\mathcal O_{Z_k})$ and $H^p(\mathcal O_{W_k}) = H^p (\mathcal O_{Z_k}) = 0$ for $k=1,2$ and $p > 0$, it suffices to prove that $H^p(\mathcal O_{\tilde X_0} ( \iota_*(F_j - F_i) ) ) = 0$ for all $p\geq 0$ and $i< j$. The surface $\tilde X_0$ is normal\,({\it cf.} \cite[p.~1178]{Hacking:ExceptionalVectorBundle}) and the divisor $F_j - F_i$ does not intersect with the exceptional locus of $\iota \colon Y \to \tilde X_0$. By Proposition~\ref{prop:CohomologyComparison_YtoX}, $H^p( \tilde X_0, \iota_*(F_j-F_i)) \simeq H^p(Y, F_j-F_i)$ for all $p \geq 0$. It remains to prove that $H^p(Y, F_j-F_i) = 0$ for $p \geq 0$. By Riemann-Roch, \[ \chi(F_j-F_i) = \frac{1}{2} ( F_j-F_i \mathbin. F_j - F_i - K_Y) + 1, \] and this is zero by Lemma~\ref{lem:CanonicalofY}. Since $(F_j\mathbin.F_j - F_i) = -1$ and $F_j \simeq \P^1$, in the short exact sequence \[ 0 \to \mathcal O_Y(-F_i) \to \mathcal O_Y(F_j-F_i) \to \mathcal O_{F_j}(F_j-F_i) \to 0, \] we obtain $H^0(-F_i) \simeq H^0(F_j-F_i)$. In particular, $H^0(F_j-F_i) = 0$. By Serre duality and Lemma~\ref{lem:CanonicalofY}, $H^2(F_j-F_i) = H^0(E_1 + F_i - F_j - C_2 - \ldots - E_{r+1})^*$. Similarly, since $(E_1\mathbin.E_1 + F_i - F_j - C_2 - \ldots - E_{r+1}) < 0$, $(F_i\mathbin.F_i - F_j - C_2 - \ldots - E_{r+1}) < 0$, and $E_1$, $F_i$ are rational curves, $H^0( E_1 + F_i - F_j - C_2 - \ldots - E_{r+1}) \simeq H^0(-F_j - C_2 - \ldots - E_{r+1}) = 0$. This proves that $H^2(F_j-F_i) = 0$. Finally, $\chi (F_j - F_i) = 0$ implies $H^1(F_j - F_i) =0$. \end{proof}
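Explicitly, the Riemann-Roch step in the proof above runs as follows: $(F_j - F_i)^2 = -2$ since $F_i$ and $F_j$ are disjoint $(-1)$-curves (the nine base points being distinct), and $(K_Y \mathbin. F_j - F_i) = (-1) - (-1) = 0$ by adjunction, so
\[
\chi(F_j - F_i) = \frac{1}{2}\bigl( -2 - 0 \bigr) + 1 = 0.
\]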
\begin{remark}\label{rmk:ExceptCollection_SerreDuality} In Proposition~\ref{prop:ExceptCollection_ofLengthNine}, the trivial bundle $\mathcal O_{X^\gen}$ can be replaced by the deformation of the line bundle $\tilde{\mathcal K}^\vee$\,(Table~\ref{table:ExcBundles_OnSingular}); the strategy of the proof is unchanged. Since $\tilde{\mathcal K}^\vee$ deforms to $\mathcal O_{X^\gen}(-K_{X^\gen})$, taking duals shows that \[ \bigl\langle \mathcal O_{X^\gen}(F_{21}^\gen),\,\ldots,\, \mathcal O_{X^\gen}(F_{91}^\gen) ,\, \mathcal O_{X^\gen}(K_{X^\gen}) \bigr\rangle \] is also an exceptional collection in $\D^{\rm b}(X^\gen)$.
This will be used later\,(see Step~\ref{item:ProofFreePart_thm:ExceptCollection_MaxLength} in the proof of Theorem~\ref{thm:ExceptCollection_MaxLength}). \end{remark} \section{The N\'eron-Severi lattices of Dolgachev surfaces of type $(2,3)$}\label{sec:NeronSeveri} This section is devoted to the study of the simplest case, namely, the case $n=3$ and $a=1$. The surface $Y$ has the configuration as in Figure~\ref{fig:Configuration_Basic}. We cook up several divisors on $X^\gen$ according to the recipe designed below. \begin{recipe}\label{recipe:PicardLatticeOfDolgachev} Recall that $\pi \colon Y \to X$ is the contraction of $C_1, C_2, E_2$ and $\iota \colon Y \to \tilde X_0$ is the contraction of $E_2$. \begin{enumerate}[label=\normalfont(\arabic{enumi})] \item Pick a divisor $D \in \Pic Y$ satisfying $(D.C_1) \in 2 \Z$, $(D. C_2) \in 3\Z$, and $(D. E_2) = 0$. \item As in Proposition~\ref{prop:VectorBundleOnReducibleSurface}, attach suitable line bundles on $W_i$\,($i=1,2$) to $\mathcal O_{\tilde X_0}(\iota_* D)$ to produce a line bundle, say $\tilde{\mathcal D}$, on $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$. It deforms to a line bundle $\mathcal O_{X^\gen}(D^\gen)$ on the Dolgachev surface $X^\gen$. \item Use the short exact sequence (\ref{eq:ExactSeq_onReducibleSurface}) to compute $\chi(\tilde {\mathcal D})$. By the deformation invariance of Euler characteristics, $\chi(D^\gen) = \chi(\tilde {\mathcal D})$. \item\label{item:recipe:CanonicalIntersection} Since the divisor $\pi_* C_0$ is supported away from the singularities of $X$, it is Cartier. By Lemma~\ref{lem:Intersection_withFibers}, $(C_0^\gen.D^\gen) = (C_0.D)$. Furthermore, $C_0^\gen = 6K_{X^\gen}$, thus the Riemann-Roch formula on the surface $X^\gen$ reads \[ (D^\gen)^2 = \frac{1}{6}( D . C_0) + 2 \chi(\tilde {\mathcal D}) - 2. \] This computes the intersections of divisors in $X^\gen$. \end{enumerate} \end{recipe} By Proposition~\ref{prop:Hacking vs Topological}, $D^\gen$ is essentially determined by the preimage of the cycle class $[\pi_*D] \in H_2(X)$ under the map in the sequence (\ref{eq:SpecializationSequence}). This suggests the following use of the terminology. \begin{definition} Let $D \in \Pic Y$ and $D^\gen \in \Pic X^\gen$ be as in Recipe~\ref{recipe:PicardLatticeOfDolgachev}. We call $D^\gen$ the \emph{lifting} of $D$. \end{definition} We note that this is a slight abuse of terminology. What lifts to $D^\gen \in \Pic X^\gen$ is $\pi_* D \in \Cl X$, not $D \in \Pic Y$. \begin{lemma}\label{lem:EulerChar_WtdProj} Let $h \in H^2(W_2,\Z)$ be the hyperplane class of $W_2 = \P(1,2,1)$. For any even integer $m \in \Z$, \[ \chi( \mathcal O_{W_2}(m) ) = \frac{1}{4}m(m+4) + 1. \] \end{lemma} \begin{proof} By well-known properties of weighted projective spaces, $(1 \cdot 2 \cdot 1)h^2 = 1$, $c_1(K_{W_2}) = -(1+2+1)h = -4h$, and $\mathcal O_{W_2}(2)$ is invertible. The Riemann-Roch formula for invertible sheaves\,({\it cf.} \cite[Lemma~7.1]{Hacking:ExceptionalVectorBundle}) says that $\chi(\mathcal O_{W_2}(m)) = \frac{1}{2}( mh \mathbin. (m+4)h) + 1 = \frac{1}{4}m(m+4) + 1$. \end{proof} \begin{lemma}\label{lem:EulerCharacteristics} Let $S$ be a projective normal surface with $\chi(\mathcal O_S) = 1$. Assume that all the divisors below are supported on the smooth locus of $S$. Then, \begin{enumerate}[ref=(\alph{enumi})] \item $\chi(D_1 + D_2) = \chi(D_1) + \chi(D_2) + (D_1 . D_2) - 1$;
\item $\chi(-D) = -\chi(D) + D^2 + 2$; \item $\chi(-D) = p_a(D)$ where $p_a(D)$ is the arithmetic genus of $D$; \item $\chi(nD) = n\chi(D) + \frac{1}{2} n(n-1)D^2 - n + 1$ for all $n \in \Z$. \item[\rm(d$'$)] $\chi(nD) = n^2\chi(D) + \frac{1}{2}n(n-1) (K_S .D) - n^2 + 1$ for all $n \in \Z$. \end{enumerate} Assume in addition that $D$ is an integral curve with $p_a(D) = 0$. Then \begin{enumerate}[resume] \item $\chi(D) = D^2 + 2$, $\chi(-D) = 0$; \item $\chi(nD) = \frac{1}{2}n(n+1)D^2 + (n+1)$ for all $n \in \Z$. \end{enumerate} \end{lemma} \begin{proof} All the formulas in the statement are simple variants of the Riemann-Roch formula. \end{proof} \begin{lemma}\label{lem:Intersection_withFibers} Let $D$, $\tilde{\mathcal D}$, $D^\gen$ be as in Recipe~\ref{recipe:PicardLatticeOfDolgachev}. Then, $(C_0.D) = (C_0^\gen . D^\gen)$. \end{lemma} \begin{proof} Since $C_0$ does not intersect with $C_1,C_2,E_2$, the corresponding line bundle $\tilde{\mathcal C}_0$ on $\tilde{\mathcal X}_0$ is the gluing of $\mathcal O_{\tilde X_0}(\iota_*C_0)$, $\mathcal O_{W_1}$, and $\mathcal O_{W_2}$. Thus, $(\tilde{\mathcal D} \otimes \tilde{\mathcal C}_0) \big\vert_{W_i} = \tilde{\mathcal D}\big\vert_{W_i}$ for $i=1,2$. From this and (\ref{eq:ExactSeq_onReducibleSurface}), it can be immediately shown that $\chi(\tilde{\mathcal D} \otimes \tilde{\mathcal C}_0 ) - \chi(\tilde{\mathcal D}) = \chi(D + C_0 ) - \chi (D)$. If $D$ is a principal divisor on $Y$, then the previous equation gives $\chi(C_0^\gen) = \chi(\tilde{\mathcal C}_0) = \chi(C_0) = 1$. Now, using Lemma~\ref{lem:EulerCharacteristics}(a), we deduce $(C_0^\gen. D^\gen) = \chi(D^\gen + C_0^\gen) - \chi(D^\gen) = \chi(\tilde{\mathcal D} \otimes \tilde{\mathcal C}_0 ) - \chi(\tilde{\mathcal D}) = \chi(D + C_0 ) - \chi (D) = (C_0.D)$. \qedhere \end{proof}
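For later use, we record some special values; they follow at once from Lemma~\ref{lem:EulerChar_WtdProj}, from $\chi(\mathcal O_{\P^1}(d)) = d+1$ on $Z_1 \simeq Z_2 \simeq \P^1$, and from $\chi(\mathcal O_{\P^2}(d)) = \frac{1}{2}(d+1)(d+2)$ on $W_1 \simeq \P^2$:
\[
\chi(\mathcal O_{W_1}(-1)) = 0,\quad \chi(\mathcal O_{W_1}(-3)) = 1,\quad \chi(\mathcal O_{W_2}(-2)) = 0,\quad \chi(\mathcal O_{W_2}(-4)) = 1,
\]
\[
\chi(\mathcal O_{Z_i}(-2)) = -1,\quad \chi(\mathcal O_{Z_i}(-3)) = -2,\quad \chi(\mathcal O_{Z_i}(-6)) = -5.
\]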
\begin{definition} Let $H \in \Pic \P^2$ be a line, let $p \colon Y \to \P^2$ be the blow down morphism, and let $L = p^*(2H)$ be the pullback of a general plane conic. Then, $( L . C_1) = 6$, $(L . C_2) = 6$ and $(L. E_2) = 0$. Let $L^\gen$ be the lifting of $L$. This means that there exists a line bundle $\tilde{\mathcal L}$ on the reducible surface $\tilde{\mathcal X}_0 = \tilde X_0 \cup W_1 \cup W_2$ such that \[ \tilde{\mathcal L}\big\vert_{\tilde X_0} = \mathcal O_{\tilde X_0}(\iota_* L),\quad \tilde{\mathcal L}\big\vert_{W_1} = \mathcal O_{W_1}(3),\ \text{and}\quad \tilde{\mathcal L}\big\vert_{W_2} = \mathcal O_{W_2}(4), \] which deforms to the line bundle $\mathcal O_{X^\gen}(L^\gen)$ on $X^\gen$. Let $F_{ij}^\gen \in \Pic X^\gen$ be the lifting of $F_i - F_j$, or equivalently, the divisor associated with the deformation of $\tilde{\mathcal F}_{ij}$\,(Table~\ref{table:ExcBundles_OnSingular}). We define \begin{align*} G_i^\gen &:= - L^\gen + 10K_{X^\gen} + F_{i9}^\gen \quad \text{for }i=1,\ldots,8; \\ G_9^\gen &:= -L^\gen + 11K_{X^\gen}. \end{align*} \end{definition} \begin{proposition}\label{prop:G_1to9} The divisors $G_1^\gen,\ldots,G_9^\gen$ satisfy the following numerical properties: \begin{enumerate}[label=(\alph{enumi})] \item $\chi(G_i^\gen) = 1$ and $(G_i^\gen . K_{X^\gen}) = -1$; \item for $i < j$, $\chi(G_i^\gen - G_j^\gen)=0$. \end{enumerate} In particular, $(G_i^\gen)^2 = -1$ and $(G_i^\gen . G_j^\gen) = 0$ for $1 \leq i < j \leq 9$. \end{proposition}
\begin{proof} First, consider the case $i \leq 8$. By Recipe~\ref{recipe:PicardLatticeOfDolgachev}\ref{item:recipe:CanonicalIntersection} and $K_{X^\gen}^2 = 0$, $(K_{X^\gen} . G_i^\gen) = \frac{1}{6}(C_0 \mathbin. -L + F_i - F_9) = -1$. Since the alternating sum of Euler characteristics in the sequence (\ref{eq:ExactSeq_onReducibleSurface}) is zero, we get the formula \begin{align*} \chi( \tilde{\mathcal L}^\vee \otimes \tilde{\mathcal F}_{i9}) ={}& \chi(-L + F_i - F_9) + \chi(\mathcal O_{W_1}(-3)) + \chi(\mathcal O_{W_2}(-4)) \\ &{}- \chi(\mathcal O_{Z_1}(-6)) - \chi(\mathcal O_{Z_2}(-6)). \end{align*} From this we compute $\chi( \tilde{\mathcal L}^\vee \otimes \tilde{\mathcal F}_{i9}) = 11$. The Riemann-Roch formula for $-L^\gen + F_{i9}^\gen = G_i^\gen - 10K_{X^\gen}$ says $(G_i^\gen - 10K_{X^\gen})^2 - (K_{X^\gen} \mathbin. G_i^\gen - 10K_{X^\gen} ) = 20$, hence $(G_i^\gen)^2 = -1$ and $\chi(G_i^\gen) = 1$. For $1 \leq i < j \leq 8$, $G_i^\gen - G_j^\gen = F_{i9}^\gen - F_{j9}^\gen = F_{ij}^\gen$. Since $(F_i - F_j \mathbin. C_1 ) = (F_i - F_j \mathbin. C_2 ) = (F_i - F_j \mathbin. E_2 ) = 0$, the divisor $F_i - F_j$ lifts to the Cartier divisor $F_{ij}^\gen$. Hence, we can compute $\chi(G_i^\gen - G_j^\gen) = \chi(F_i - F_j) = 0$. This proves the statement for $i,j \leq 8$. The proof of the statement involving $G_9^\gen$ follows the same lines. Since $\chi(\tilde{\mathcal L}^\vee ) = 12$, $( G_9^\gen - 11K_{X^\gen} ) ^2 - (K_{X^\gen}\mathbin . G_9^\gen - 11K_{X^\gen}) = 22$. This leads to $(G_9^\gen)^2 = -1$. For $i \leq 8$, \begin{align*} \chi(G_i^\gen - G_9^\gen) ={}& \chi(F_i - F_9 - K_Y) + \chi(\mathcal O_{W_1}(-1)) + \chi(\mathcal O_{W_2}(-2)) \\ &{} - \chi(\mathcal O_{Z_1}(-2)) - \chi(\mathcal O_{Z_2}(-3)), \end{align*} and it is immediate to see that the right hand side is zero. \end{proof} We complete our list of generators of $\Pic X^\gen$ by introducing $G_{10}^\gen$. The choice of $G_{10}^\gen$ is motivated by the proof of the step (iii)${}\Rightarrow{}$(i) in \cite[Theorem~3.1]{Vial:Exceptional_NeronSeveriLattice}. \begin{proposition}\label{prop:G_10} Let $G_{10}^\gen$ be the $\Q$-divisor $\frac{1}{3}( G_1^\gen + G_2^\gen + \ldots + G_9^\gen - K_{X^\gen})$. Then, $G_{10}^\gen$ is Cartier. \end{proposition} \begin{proof} Since \[ \sum_{i=1}^9 G_i^\gen - K_{X^\gen} = - 9L^\gen + 90 K_{X^\gen} + \sum_{i=1}^8 F_{i9}^\gen, \] it suffices to prove that $\sum\limits_{i=1}^8 F_{i9}^\gen = 3D^\gen$ for some $D^\gen \in \Pic X^\gen$. As before, let $p \colon Y \to \P^2$ be the blow down morphism and let $H$ be a line in $\P^2$. Since $K_Y = p^* (-3H) + F_1 + F_2 + \ldots + F_9 + E_1 + E_2 + 2E_3$, we have $K_Y - E_1 - E_2 - 2E_3 = p^*(-3H) + F_1 + \ldots + F_9 = -C_0$, so $F_1 + \ldots + F_9 = 3p^*H - C_0$. Consider the divisor $p^*H - 3F_9$ in $Y$. Clearly, the intersections of $p^*H - 3F_9$ with $C_1$, $C_2$, $E_2$ are all zero, hence $p^*H - 3F_9$ lifts to a Cartier divisor $(p^*H-3F_9)^\gen$ in $X^\gen$. Since \begin{align*} \sum_{i=1}^8 (F_i - F_9) &= \sum_{i=1}^9 F_i - 9F_9 \\ &= 3(p^*H - 3F_9) - C_0 \end{align*} and $C_0$ lifts to $6K_{X^\gen}$, the divisor $D^\gen := (p^*H - 3F_9)^\gen - 2K_{X^\gen}$ satisfies $\sum_{i=1}^8F_{i9}^\gen = 3D^\gen$. \end{proof} Combining the propositions \ref{prop:G_1to9} and \ref{prop:G_10}, we obtain:\nopagebreak
\begin{theorem}\label{thm:Picard_ofGeneralFiber} The intersection matrix of the divisors $\{G_i^\gen\}_{i=1}^{10}$ is \begin{equation} \Bigl( (G_i^\gen . G_j^\gen ) \Bigr)_{1 \leq i,j \leq 10} = \left[ \begin{array}{cccc} -1 & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & -1 & 0 \\ 0 & \cdots & 0 & 1 \end{array} \right]\raisebox{-2\baselineskip}[0pt][0pt]{.} \label{eq:IntersectionMatrix} \end{equation} In particular, the set $G:=\{G_i^\gen\}_{i=1}^{10}$ forms a $\Z$-basis of the N\'eron-Severi lattice $\op{NS}(X^\gen)$. By \cite[p.~137]{Dolgachev:AlgebraicSurfaces}, $\Pic X^\gen$ is torsion-free, thus $G$ forms a $\Z$-basis for $\Pic X^\gen$. \end{theorem} \begin{proof} We claim that the set of divisors $\{G_i^\gen\}_{i=1}^{10}$ generates the N\'eron-Severi lattice. By the Hodge index theorem, there is a $\Z$-basis for $\op{NS}(X^\gen)$, say $\alpha = \{\alpha_i\}_{i=1}^{10}$, such that the intersection matrix with respect to $\{\alpha_i\}_{i=1}^{10}$ is the same as (\ref{eq:IntersectionMatrix}). Let $A = ( a_{ij} )_{1 \leq i,j \leq 10}$ be the integral matrix determined by \[ G_j^\gen = \sum_{i=1}^{10} a_{ij} \alpha_i. \] Given $v \in \op{NS}(X^\gen)$, let $[v]_\alpha$ be the column matrix of coordinates with respect to the basis $\alpha$. Then, $[G_i^\gen]_\alpha = Ae_i$ where $e_i$ is the $i$th standard column vector. For $1 \leq i,j \leq 10$, \[ (G_i^\gen . G_j^\gen) = (A e_i)^{\rm t} E (A e_j), \] where $E$ is the intersection matrix with respect to the basis $\alpha$. The above equation implies that the intersection matrix with respect to the set $G$ is $A^{\rm t} E A$. Since the intersection matrices with respect to both $G$ and $\alpha$ are the same, $E = A^{\rm t} E A$. Taking determinants yields $1 = (\det A)^2$, hence $A$ is invertible over $\Z$. This proves that $G$ is a $\Z$-basis of $\op{NS}(X^\gen)$. The last statement on the Picard group follows immediately. \end{proof}
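As a consistency check on the computation $\chi(\tilde{\mathcal L}^\vee \otimes \tilde{\mathcal F}_{i9}) = 11$ in the proof of Proposition~\ref{prop:G_1to9}: on $Y$ we have $L^2 = 4$, $(L.K_Y) = (p^*(2H) \mathbin. p^*(-3H)) = -6$ and $(F_i.K_Y) = -1$, so $(-L + F_i - F_9)^2 = 4 - 1 - 1 = 2$ and $(-L + F_i - F_9 \mathbin. K_Y) = 6 - 1 + 1 = 6$, whence
\[
\chi(-L + F_i - F_9) = 1 + \tfrac{1}{2}(2 - 6) = -1.
\]
Together with $\chi(\mathcal O_{W_1}(-3)) = \chi(\mathcal O_{W_2}(-4)) = 1$ and $\chi(\mathcal O_{Z_1}(-6)) = \chi(\mathcal O_{Z_2}(-6)) = -5$, this gives $-1 + 1 + 1 + 5 + 5 = 11$, as claimed.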
We close the section with the summary of divisors on $X^\gen$. \begin{summary}\label{summary:Divisors_onX^g} Recall that $Y$ is the rational elliptic surface in Section~\ref{sec:Construction}, $p \colon Y \to \P^2$ is the blow down morphism, $H \in \Pic \P^2$ is a hyperplane divisor, and $\pi \colon Y \to X$ is the contraction of $C_1,\,C_2,\,E_2$. Then, \begin{enumerate}[label=\normalfont(\arabic{enumi})] \item $F_{ij}^\gen$\,($1\leq i,j\leq 9$) is the lifting of $F_i - F_j$; \item $(p^*H-3F_9)^\gen$ is the lifting of $p^*H - 3F_9$; \item $L^\gen$ is the lifting of $p^*(2H)$; \item $G_i^\gen = - L^\gen + 10 K_{X^\gen} + F_{i9}^\gen$ for $i=1,\ldots,8$; \item $G_9^\gen = -L^\gen + 11K_{X^\gen}$; \item $G_{10}^\gen = -3L^\gen + (p^*H - 3F_9)^\gen + 28K_{X^\gen}$. \end{enumerate} \end{summary} \section{Exceptional collections of maximal length on Dolgachev surfaces of type $(2,3)$}\label{sec:ExcepCollectMaxLength} \subsection{Exceptional collections of maximal length} We continue to study the case $(n,a)=(3,1)$. Throughout this section, we will prove that there exists an exceptional collection of maximal length in $\D^{\rm b}(X^\gen)$. Proving exceptionality of a given collection usually consists of numerous cohomology computations, so we begin by introducing some computational machineries. \begin{lemma}\label{lem:DummyBundle} The liftings $C_1^\gen$, $(2C_2+E_2)^\gen$ exist and they are the zero divisors in $X^\gen$. \end{lemma} \begin{proof} Let $\tilde{\mathcal C}_1$ be the gluing of line bundles $\mathcal O_{\tilde X_0}(\iota_* C_1)$, $\mathcal O_{W_1}(-2)$, and $\mathcal O_{W_2}$, and let $\mathcal O_{X^\gen}(C_1^\gen)$ be its deformation. It is immediate to see that $\chi(C_1^\gen) = 1$ and $\chi(-C_1^\gen)=1$. By Riemann-Roch formula, $(C_1^\gen)^2 = (C_1^\gen . K_{X^\gen}) = 0$. For $i \leq 8$, \begin{align*} \chi(G_i^\gen - 10K_{X^\gen} - C_1^\gen ) ={}& \chi( \tilde{\mathcal L}^\vee \otimes \tilde{\mathcal F}_{i9} \otimes \tilde{\mathcal C}_1^\vee ) \\ ={}& \chi(-L + F_i - F_9 - C_1) + \chi(\mathcal O_{W_1}(-1)) + \chi(\mathcal O_{W_2}(-4)) \\ &{} - \chi(\mathcal O_{Z_1}(-2)) - \chi(\mathcal O_{Z_2}(-6)), \end{align*} which yields $\chi(G_i^\gen - 10K_{X^\gen} - C_1^\gen )=11$. By the Riemann-Roch, $(G_i^\gen - 10K_{X^\gen}- C_1^\gen)^2 - (K_{X^\gen} \mathbin . G_i^\gen - 10K_{X^\gen} - C_1^\gen) = 2\chi(G_i^\gen -10K_{X^\gen} - C_1^\gen)-2 = 20$. The left hand side is $-2(G_i^\gen . C_1^\gen) + 20$, thus $(G_i^\gen . C_1^\gen) =0$. Since $(C_1^\gen . K_{X^\gen}) = 0$ and $3G_{10}^\gen = G_1^\gen + \ldots + G_9^\gen - K_{X^\gen}$, $(G_{10}^\gen . C_1^\gen) = 0$. Hence, $C_1^\gen$ is numerically trivial by Theorem~\ref{thm:Picard_ofGeneralFiber}. This shows that $C_1^\gen$ is trivial since there is no torsion in $\Pic X^\gen$. Exactly the same argument holds for the lifting of $2C_2 + E_2$. \end{proof} \begin{example}\label{eg:DivisorVaries_onSingular} Since $C_0$ lifts to $6K_{X^\gen}$, $2E_1 = C_0 - C_1$ lifts to $6K_{X^\gen}$. Thus $E_1$ lifts to $3K_{X^\gen}$. Similarly, $C_2 + E_2 + E_3$ lifts to $2K_{X^\gen}$. Hence, $K_Y = E_1 - C_2 - E_2 - E_3$ lifts to $3K_{X^\gen} - 2K_{X^\gen} = K_{X^\gen}$. Also, $(E_2 + 2E_3) - E_1$ lifts to $K_{X^\gen}$, whereas $K_Y$ and $(E_2+2E_3)-E_1$ are different in $\Pic Y$. These are essentially due to Lemma~\ref{lem:DummyBundle}. For instance, we have \begin{align*} (E_2 + 2E_3) - E_1 - K_Y &= - 2 E_1 + C_2 + 2E_2 + 3E_3 \\ &= -C_1, \end{align*} thus $(E_2 + 2E_3)^\gen - E_1^\gen - K_{X^\gen} = -C_1^\gen = 0$. 
\end{example} As Example~\ref{eg:DivisorVaries_onSingular} shows, there is some freedom in choosing $D \in \Pic Y$ for a fixed divisor $D^\gen \in \Pic X^\gen$. The following lemma gives guidance on how to choose $D$. Note that the lemma requires assumptions on $(D.C_1)$ and $(D.C_2)$, but Lemma~\ref{lem:DummyBundle} provides a way to adjust those numbers. \begin{lemma}\label{lem:H0Computation} Let $D$ be a divisor in $Y$ such that $(D.C_1) = 2d_1 \in 2\Z$, $(D.C_2) = 3d_2 \in 3\Z$, and $(D . E_2) = 0$. Let $D^\gen$ be the lifting of $D$. Then, \[ h^0(X^\gen, D^\gen) \leq h^0(Y,D) + h^0(\mathcal O_{W_1}(d_1)) + h^0(\mathcal O_{W_2}(2d_2)) - h^0(\mathcal O_{Z_1}(2d_1)) - h^0(\mathcal O_{Z_2}(3d_2)). \] In particular, if $d_1,d_2 \leq 1$, then $h^0(X^\gen, D^\gen) \leq h^0(Y,D)$. \end{lemma} \begin{proof} Since $(D. E_2) = 0$, we have $H^p(\tilde X_0,\iota_*D) \simeq H^p(Y,D)$ for all $p \geq 0$\,(Proposition~\ref{prop:CohomologyComparison_YtoX}). Recall that there exists a short exact sequence (introduced in (\ref{eq:ExactSeq_onReducibleSurface})) \begin{equation} 0 \to \tilde{\mathcal D} \to \mathcal O_{\tilde X_0}(\iota_*D) \oplus \mathcal O_{W_1}(d_1) \oplus \mathcal O_{W_2}(2d_2) \to \mathcal O_{Z_1}(2d_1) \oplus \mathcal O_{Z_2}(3d_2) \to 0, \label{eq:ExactSeq_onReducibleSurface_SimplerVer} \end{equation} where $\tilde{\mathcal D}$ is the line bundle constructed in Proposition~\ref{prop:VectorBundleOnReducibleSurface}, and the notations $W_i$, $Z_i$ are explained in (\ref{eq:SecondWtdBlowupExceptional}). We first claim the following: if $d_1,d_2 \leq 1$, then the maps $H^0(\mathcal O_{W_1}(d_1)) \to H^0(\mathcal O_{Z_1}(2d_1))$ and $H^0(\mathcal O_{W_2}(2d_2)) \to H^0(\mathcal O_{Z_2}(3d_2))$ are isomorphisms. The only nontrivial cases are $d_1 = 1$ and $d_2 = 1$. Since $Z_1$ is a smooth conic in $W_1 = \P^2$, there is a short exact sequence \[ 0 \to \mathcal O_{W_1}(-1) \to \mathcal O_{W_1}(1) \to \mathcal O_{Z_1}(2) \to 0. \] All the cohomology groups of $\mathcal O_{W_1}(-1)$ vanish, so $H^p( \mathcal O_{W_1}(1)) \simeq H^p(\mathcal O_{Z_1}(2))$ for all $p \geq 0$. In the case $d_2 = 1$, we consider \[ 0 \to \mathcal I_{Z_2}(2) \to \mathcal O_{W_2}(2) \to \mathcal O_{Z_2}(3) \to 0, \] where $\mathcal I_{Z_2} \subset \mathcal O_{W_2}$ is the ideal sheaf of the closed subscheme $Z_2 = (xy = z ^3 ) \subset \P_{x,y,z}(1,2,1)$. The ideal $(xy - z^3)$ does not contain any nonzero homogeneous element of degree $2$, so $H^0(\mathcal I_{Z_2}(2)) = 0$. This shows that $H^0( \mathcal O_{W_2}(2)) \to H^0(\mathcal O_{Z_2}(3))$ is injective. Furthermore, $H^0(\mathcal O_{W_2}(2))$ is generated by $x^2, xz, z^2, y$, hence $h^0(\mathcal O_{W_2}(2)) = h^0(\mathcal O_{Z_2}(3)) = 4$. This proves that $H^0(\mathcal O_{W_2}(2)) \simeq H^0(\mathcal O_{Z_2}(3))$, as desired. If $d_1,d_2>1$, it is clear that $H^0(\mathcal O_{W_1}(d_1)) \to H^0(\mathcal O_{Z_1}(2d_1))$ and $H^0(\mathcal O_{W_2}(2d_2)) \to H^0(\mathcal O_{Z_2}(3d_2))$ are surjective. The cohomology long exact sequence of (\ref{eq:ExactSeq_onReducibleSurface_SimplerVer}) begins with \begin{align*} 0 \to H^0(\tilde{\mathcal D}) \to H^0(\iota_*D) \oplus H^0( \mathcal O_{W_1}(d_1) ) \oplus H^0(\mathcal O_{W_2}(2d_2)) \\ \to H^0(\mathcal O_{Z_1}(2d_1)) \oplus H^0( \mathcal O_{Z_2}(3d_2)).\qquad\qquad \end{align*} By the previous arguments, the last map is surjective. Indeed, the image of $(0, s_1, s_2) \in H^0(\iota_*D) \oplus H^0( \mathcal O_{W_1}(d_1) ) \oplus H^0(\mathcal O_{W_2}(2d_2))$ is $(-s_1\big\vert_{Z_1}, -s_2\big\vert_{Z_2})$.
The upper semicontinuity of cohomology then establishes the inequality in the statement. \qedhere \end{proof}
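The two cohomological claims on $W_2$ used in this proof can be confirmed directly; here is a minimal Macaulay2 check (ours, with ad hoc variable names) on the weighted projective plane $W_2 = \P_{x,y,z}(1,2,1)$.
\begin{verbatim}
-- Sections of O_{W_2}(2) on P(1,2,1) and the degree-2 part of (xy - z^3)
S = QQ[x,y,z, Degrees => {1,2,1}];
basis(2, S)             -- four monomials: x^2, xz, z^2, y, so h^0(O_{W_2}(2)) = 4
I = ideal(x*y - z^3);
hilbertFunction(2, I)   -- 4 = dim (S/I)_2, so I contains no nonzero
                        -- homogeneous element of degree 2
\end{verbatim}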
By \cite[Theorem~3.1]{Vial:Exceptional_NeronSeveriLattice}, it can be shown that the collection (\ref{eq:ExcColl_MaxLength}) in the theorem below is a numerically exceptional collection. Our aim is to prove that (\ref{eq:ExcColl_MaxLength}) is indeed an exceptional collection in $\D^{\rm b}(X^\gen)$. Before proceeding to the theorem, we introduce a piece of terminology. \begin{definition} During the construction of $Y$, the node of $p_*C_2$ is blown up twice; the second blow up corresponds to a choice of one of the two tangent directions\footnote{For example, the nodal curve $y^2 = x^3+x^2z$ has two tangent directions $y=\pm x$ at $(0,0,1)$.} at the node of $p_*C_2$. We refer to the tangent direction corresponding to the second blow up as the \emph{distinguished tangent direction} at the node of $p_*C_2$. \end{definition} \begin{theorem}\label{thm:ExceptCollection_MaxLength} Suppose $X^\gen$ originates from a cubic pencil $\lvert \lambda p_* C_1 + \mu p_*C_2 \rvert$ generated by two general plane nodal cubics. Let $G_1^\gen,\ldots,G_{10}^\gen$ be as in Summary~\ref{summary:Divisors_onX^g}, let $G_0^\gen$ be the zero divisor, and let $G_{11}^\gen = 2G_{10}^\gen$. For notational simplicity, we denote the rank of $\Ext^p(G_i^\gen, G_j^\gen)(=H^p(-G_i^\gen+G_j^\gen))$ by $h^p_{ij}$. The values of $h_{ij}^p$ are described below. For example, the triple in the ($G_9^\gen$-row, $G_{10}^\gen$-column), which is $(0\ 0\ 2)$, means that $(h^0_{9,10},\, h^1_{9,10},\, h^2_{9,10}) = (0,0,2)$. \begin{equation} \scalebox{0.9}{$ \begin{array}{c|ccccc} & G_0^\gen & G_{1 \leq i \leq 8}^\gen & G_9^\gen & G_{10}^\gen & G_{11}^\gen \\[2pt] \hline G_0^\gen & 1\,0\,0 & 0\,0\,1 & 0\,0\,1 & 0\,0\,3 & 0\,0\,6 \\ G_{1 \leq i \leq 8}^\gen & & 1\,0\,0 & & 0\,0\,2 & 0\,0\,5\\ G_9^\gen & & & 1\,0\,0 & 0\,0\,2 & 0\,0\,5 \\ G_{10}^\gen & & & & 1\,0\,0 & 0\,0\,3 \\ G_{11}^\gen & & & & & 1\,0\,0 \end{array} $}\label{eq:thm:ExceptCollection_MaxLength} \end{equation} The symbol $G_{1 \leq i \leq 8}^\gen$ means $G_i^\gen$ for each $i=1,2,\ldots,8$. The blanks stand for $0\,0\,0$, and $h^p_{ij} = 0$ for all $p$ and $1 \leq i\neq j \leq 8$. In particular, the collection \begin{equation} \big\langle \mathcal O_{X^\gen}(G_0^\gen),\ \mathcal O_{X^\gen}(G_1^\gen),\ \ldots,\ \mathcal O_{X^\gen}(G_{10}^\gen),\ \mathcal O_{X^\gen}(G_{11}^\gen) \big\rangle \label{eq:ExcColl_MaxLength} \end{equation} is an exceptional collection of length $12$ in $\D^{\rm b}(X^\gen)$. \end{theorem} \begin{proof} Recall that (Summary~\ref{summary:Divisors_onX^g}) \begin{align*} G_i^\gen &= -L^\gen + F_{i9}^\gen + 10K_{X^\gen},\ i=1,\ldots,8; \\ G_9^\gen &= -L^\gen + 11K_{X^\gen}; \\ G_{10}^\gen &= -3L^\gen + (p^*H - 3F_9)^\gen + 28K_{X^\gen}; \\ G_{11}^\gen &= -6L^\gen + 2(p^*H - 3F_9)^\gen + 56K_{X^\gen}. \end{align*} The proof consists of numerous cohomology vanishings, which we divide into several steps. Note that we can always evaluate $\chi(-G_i^\gen + G_j^\gen) = \sum_p (-1)^p h^p_{ij}$, thus it suffices to compute only two (mostly $h^0$ and $h^2$) of $\{h^p_{ij} : p=0,1,2\}$. In the first part of the proof, we deduce the following using numerical methods.
\begin{equation} \scalebox{0.9}{$ \begin{array}{c|ccccc} & G_0^\gen & G_{1 \leq i \leq 8}^\gen & G_9^\gen & G_{10}^\gen & G_{11}^\gen \\[2pt] \hline G_0^\gen & 1\,0\,0 & 0\,0\,1 & 0\,0\,1 & \scriptstyle\chi=3 & \scriptstyle\chi=6 \\ G_{1 \leq i \leq 8}^\gen & 0\,0\,0 & 1\,0\,0 & 0\,0\,0 & 0\,0\,2 & \scriptstyle\chi=5 \\ G_9^\gen &\scriptstyle\chi=0 & 0\,0\,0 & 1\,0\,0 & 0\,0\,2 & \scriptstyle\chi=5 \\ G_{10}^\gen &\scriptstyle\chi=0 &\scriptstyle\chi=0 &\scriptstyle\chi=0 & 1\,0\,0 & \scriptstyle\chi=3 \\ G_{11}^\gen &\scriptstyle\chi=0 &\scriptstyle\chi=0 &\scriptstyle\chi=0 &\scriptstyle\chi=0 & 1\,0\,0 \end{array} $}\label{eq:HumanPart_thm:ExceptCollection_MaxLength} \end{equation} A slot containing $\chi=d$ means that $\chi(-G_i^\gen+G_j^\gen)=\sum_{p}(-1)^ph^p_{ij} =d$. For those slots we do not compute the individual $h^p_{ij}$ for the moment; in the end, they will be completed through another approach. \begin{enumerate}[fullwidth, itemsep=5pt minus 3pt, label=\bf{}Step~\arabic{enumi}., ref=\arabic{enumi}] \item\label{item:NumericalStep_thm:ExceptCollection_MaxLength} As explained above, the collection (\ref{eq:ExcColl_MaxLength}) is numerically exceptional, hence $\chi(-G_i^\gen + G_j^\gen) = \sum_p (-1)^p h^p_{ij}=0$ for all $0 \leq j < i \leq 11$. Furthermore, the surface $X^\gen$ is minimal, thus $K_{X^\gen}$ is nef. It follows that $h^0(D^\gen) = 0$ if $D^\gen$ is $K_{X^\gen}$-negative, and $h^2(D^\gen)=0$ if $D^\gen$ is $K_{X^\gen}$-positive. Since \[ (K_{X^\gen} . G_i^\gen) = \left\{ \begin{array}{ll} -1 & i \leq 9 \\ -3 & i = 10 \\ -6 & i = 11, \end{array} \right. \] these observations already force a number of cohomology groups to vanish. Indeed, all the numbers in the following list are zero: \[ \{ h^0_{0i} \}_{i \leq 11},\ \{ h^0_{i,10}, h^0_{i,11} \}_{i \leq 9},\ \{ h^2_{i0} \}_{i \leq 11},\ \{ h^2_{10,i}, h^2_{11,i} \}_{i \leq 9}. \] Also, since $G_{11}^\gen = 2G_{10}^\gen$, $h^0_{10,11} = h^0_{0,10} = 0$ and $h^2_{11,10} = h^2_{10,0} = 0$. \item\label{item:ProofFreePart_thm:ExceptCollection_MaxLength} If $1 \leq j \neq i \leq 8$, then $-G_i^\gen + G_j^\gen$ can be realized as the lifting of $-F_i + F_j$. Hence, \[ \bigl\langle \mathcal O_{X^\gen}(G_1^\gen),\ \ldots,\ \mathcal O_{X^\gen}(G_8^\gen) \bigr\rangle \] is an exceptional collection by Proposition~\ref{prop:ExceptCollection_ofLengthNine}. This proves that $h^p_{ij}=0$ for all $p \geq 0$ and $1 \leq i \neq j \leq 8$. Also, $-G_9^\gen + G_i^\gen = -K_{X^\gen} + F_{i9}^\gen$ for $1 \leq i \leq 8$. Remark~\ref{rmk:ExceptCollection_SerreDuality} shows that $h^p_{9i} = h^p(-K_{X^\gen} + F_{i9}^\gen)=0$ for $p \geq 0$ and $1 \leq i \leq 8$. Furthermore, by Serre duality, $h^p_{i9} = h^{2-p}(F_{i9}^\gen)=0$ for all $p \geq 0$ and $1 \leq i \leq 8$. \item\label{item:Strategy_HumanPart_thm:ExceptCollection_MaxLength} We verify (\ref{eq:HumanPart_thm:ExceptCollection_MaxLength}) using the following strategy: \begin{enumerate} \item If we want to compute $h^0_{ij}$, then pick $D_{ij}^\gen := -G_i^\gen + G_j^\gen$. If the aim is to evaluate $h^2_{ij}$, then take $D_{ij}^\gen := K_{X^\gen} + G_i^\gen - G_j^\gen$, so that $h^2_{ij} = h^0(D_{ij}^\gen)$ by Serre duality. \item Express $D^\gen_{ij}$ in terms of $L^\gen$, $(p^*H - 3F_9)^\gen$, $F_{i9}^\gen$, and $K_{X^\gen}$. Via Summary~\ref{summary:Divisors_onX^g}, we can translate $L^\gen$, $(p^*H - 3F_9)^\gen$, $F_{i9}^\gen$ into divisors on $Y$.
Further, we have $6K_{X^\gen} = C_0^\gen$, $3K_{X^\gen} = E_1^\gen$, and $2K_{X^\gen} = (C_2+E_2+E_3)^\gen$, thus any integer multiple of $K_{X^\gen}$ can also be translated into divisors on $Y$. Together with these translations, use Lemma~\ref{lem:DummyBundle} to find a Cartier divisor $D_{ij}$ on $Y$ which lifts to $D_{ij}^\gen$, and which satisfies $(D_{ij}.C_1) \leq 2$, $(D_{ij}.C_2) \leq 3$, $(D_{ij}.E_2)=0$. \item Compute an upper bound of $h^0(D_{ij})$. Then by Lemma~\ref{lem:H0Computation}, $h^0(D_{ij}^\gen) \leq h^0(D_{ij})$. \item In all cases, we will find that the upper bound obtained in (3) coincides with $\chi(-G_i^\gen + G_j^\gen)$. Also, at least one of $\{ h^0_{ij}, h^2_{ij}\}$ is zero by Step~\ref{item:NumericalStep_thm:ExceptCollection_MaxLength}. From this we deduce $h^0(D_{ij}^\gen)\geq{}$(the upper bound obtained in (3)), hence the equality holds. Consequently, the numbers $\{h^p_{ij} : p=0,1,2\}$ are evaluated. \end{enumerate}
\item We follow the strategy in Step~\ref{item:Strategy_HumanPart_thm:ExceptCollection_MaxLength} to verify (\ref{eq:HumanPart_thm:ExceptCollection_MaxLength}). Let $i \in \{1,\ldots,8\}$. To verify $h^0_{i0}=0$, we take $D_{i0}^\gen = -G_i^\gen = L^\gen - F_{i9}^\gen - 10K_{X^\gen}$. Translation into the divisors on $Y$ gives: \[ D_{i0}' = p^*(2H) + F_9-F_i - 2C_0 + (C_2+E_2+E_3) \] Since $(D_{i0}'.C_1) = 6$ and $(D_{i0}' . C_2) = 3$, we replace the divisor $D_{i0}'$ by $D_{i0} := D_{i0}' + C_1$ so that the condition $(D_{i0}.C_1) \leq 2$ is fulfilled. Now, $h^0(D_{i0})=0$ by Dictionary~\ref{dictionary:H0Computations}\ref{dictionary:-G_i}, thus $h^0_{i0} \leq h^0(D_{i0}) = 0$ by Lemma~\ref{lem:H0Computation}. Finally, $\chi(-G_i^\gen)=0$ and $h^2_{i0}=0$\,(Step~\ref{item:NumericalStep_thm:ExceptCollection_MaxLength}), hence $h^1_{i0}=0$.
We repeat this routine for the following divisors: \begin{align*} D_{0i} &= p^*(2H) + F_9 - F_i - C_0 + C_1 - E_1 + (2C_2+E_2); \\ D_{09} &= p^*(2H) - 2C_0 + C_1 + (C_2 + E_2 + E_3); \\ D_{i,10} &= p^*(3H) + 2F_9 + F_i - 2C_0 + 2C_1 - E_1 - (C_2 + E_2 + E_3) + 2(2C_2+E_2); \\ D_{9,10} &= p^*(3H) + 3F_9 - 3C_0 + 3C_1 + (C_2+E_2+E_3) + (2C_2+E_2). \end{align*} Together with Dictionary~\ref{dictionary:H0Computations}\hyperref[dictionary:Nonvanish_K-G_i]{(2--5)}, all the slots of (\ref{eq:HumanPart_thm:ExceptCollection_MaxLength}) are verified. \item\label{item:M2Part_thm:ExceptCollection_MaxLength} It is difficult to complete (\ref{eq:thm:ExceptCollection_MaxLength}) using the numerical argument\,(see, for example, Remark~\ref{rmk:Configuration_andCohomology}). We introduce another plan to overcome these difficulties. \begin{enumerate} \item Take $D_{ij}^\gen \in \Pic X^\gen$ and $D_{ij} \in \Pic Y$ as in Step~\hyperref[item:Strategy_HumanPart_thm:ExceptCollection_MaxLength]{3(1--2)}. We may assume $(D_{ij}.C_1) \in \{0,2\}$ and $(D_{ij}.C_2) \in \{-3,0,3\}$. If $(D_{ij}.C_2) = -3$, then $(D_{ij} - C_2 \mathbin. E_2) = -1$, thus $h^0(D_{ij}) = h^0(D_{ij} - C_2 - E_2)$. Hence, we replace $D_{ij}$ by $D_{ij} - C_2 -E_2$ if $(D_{ij}.C_2) = -3$. In some cases we have $(D_{ij} . F_9) = -1$; in those cases we make the further replacement $D_{ij} \mapsto D_{ij}-F_9$. \item Rewrite $D_{ij}$ in terms of the $\Z$-basis $\{p^*H, F_1,\ldots, F_9, E_1,E_2,E_3\}$ so that $D_{ij}$ is expressed in the following form: \[ D_{ij} = p^*(dH) - \bigl( \text{sum of exceptional curves of }p \colon Y \to \P^2\bigr). \] \item\label{item:PlaneCurveExistence_thm:ExceptCollection_MaxLength} If $h^0(D_{ij}) > 0$, then we consider an effective divisor $D$ which is linearly equivalent to $D_{ij}$. Then, $p_*D$ is the plane curve of degree $d$ which satisfies the conditions imposed by the exceptional part of $D_{ij}$. Let $\mathcal I_{\rm C} \subset \mathcal O_{\P^2}$ be the ideal sheaf associated with the imposed conditions on $p_*D$. Then the curve $p_*D$ contributes to the number $h^0(\mathcal O_{\P^2}(d) \otimes \mathcal I_{\rm C})$. Indeed, this number gives an upper bound of $h^0(D_{ij})$\,(it is clear that if $D'$ is an effective divisor linearly equivalent to $D$ such that $p_*D$ and $p_*D'$ coincide as plane curves, then $D$ and $D'$ must be the same curve in $Y$). \item As in Step~\hyperref[item:Strategy_HumanPart_thm:ExceptCollection_MaxLength]{3(4)}, we will see that all the upper bounds $h^0(D_{ij})$ coincide with the numerical invariants $\chi(-G_i^\gen + G_j^\gen)$. Thus, the upper bounds $h^0(D_{ij})$ obtained in (3) determine $\{h^p_{ij} : p=0,1,2\}$ precisely. \end{enumerate} \item As explained in Remark~\ref{rmk:Configuration_andCohomology}, the value $h^0(D_{ij})$ might depend on the configuration of $p_*C_1$ and $p_*C_2$. However, for general $p_*C_1 = (h_1=0)$, $p_*C_2 = (h_2=0)$, the minimum value of $h^0(D_{ij})$ is attained. This can be observed in the following way. Let $h = \sum_{\alpha} a_\alpha \mathbf{x}^\alpha$ be a homogeneous equation of degree $d$, where the sum is taken over the $3$-tuples $\alpha = (\alpha_x, \alpha_y, \alpha_z)$ with $\alpha_x + \alpha_y + \alpha_z = d$ and $\mathbf{x}^\alpha = x^{\alpha_x} y^{\alpha_y} z^{\alpha_z}$. Then the ideal $\mathcal I_{\rm C}$ imposes linear relations on $\{a_\alpha\}_\alpha$, thus we get a linear system, or equivalently, a matrix $M$, in the variables $\{a_\alpha\}_\alpha$.
After perturbing $h_1$ and $h_2$, the rank of $M$ does not decrease, since \{$\op{rank} M \geq r_0$\} is an open condition for any fixed $r_0$. From this we conclude: if $h^0(D_{ij}) \leq r$ for at least one pair of $p_*C_1$ and $p_*C_2$, then $h^0(D_{ij}) \leq r$ for general $p_*C_1$ and $p_*C_2$. \item\label{item:M2Configuration_thm:ExceptCollection_MaxLength} Let $h_1 = (y-z)^2z - x^3 - x^2z$ and $h_2 = x^3 - 2xy^2 + 2xyz + y^2z$. These equations define plane nodal cubics such that \begin{enumerate} \item $p_*C_1$ has its node at $[0,1,1]$, and $p_*C_2$ has its node at $[0,0,1]$; \item $p_*C_2$ has two tangent directions ($y=0$ and $y=-2x$) at its node; \item $p_*C_1 \cap p_*C_2$ contains two $\Q$-rational points, namely $[0,1,0]$ and $[-1,1,1]$. \end{enumerate} We take $y=0$ as the distinguished tangent direction at the node of $p_*C_2$, and take $p_*F_9 = [0,1,0]$, $p_*F_8 = [-1,1,1]$. The ideals in Table~\ref{table:Ideal_ofConditions_thm:ExceptCollection_MaxLength} are the building blocks of the ideal $\mathcal I_{\rm C}$ introduced in Step~\hyperref[item:PlaneCurveExistence_thm:ExceptCollection_MaxLength]{5(3)}. \[ \begin{array}{c|c|c|c} \text{symbol} & \text{ideal form} & \text{ideal sheaf at the\,...} & \text{divisor on }Y \\ \hline \mathcal I_{E_1} & (x,y-z) & \text{node of }p_*C_1 & -E_1 \\ \mathcal I_{E_2+E_3} & (x,y) & \text{node of }p_*C_2 & -(E_2+E_3) \\ \mathcal I_{E_2+2E_3} & (x^2,y) & \begin{tabular}{c} \footnotesize distinguished tangent \\[-5pt] \footnotesize at the node of $p_*C_2$ \end{tabular} & -(E_2+2E_3) \\ \mathcal J_9 & (h_1,h_2) & \text{nine base points} & - \sum_{i\leq9} F_i \\ \mathcal J_7 & \scriptstyle \mathcal J_9 {\textstyle/} (x+z,y-z)(x,z) & \text{seven base points} & -\sum_{i\leq7} F_i \\ \mathcal J_8 & (x+z,y-z)\mathcal J_7 & \text{eight base points} & -\sum_{i\leq8} F_i \end{array} \]\nopagebreak\vskip-\baselineskip\captionof{table}{The ideals associated with the exceptional divisors}\label{table:Ideal_ofConditions_thm:ExceptCollection_MaxLength}\vskip+0.33\baselineskip Note that the nine base points contain $[0,1,0]$ and $[-1,1,1]$, thus there exists an ideal $\mathcal J_7$ such that $\mathcal J_9 = (x+z,y-z)(x,z) \mathcal J_7$.
\item We sketch the proof of $h^p_{10,9}=h^p(-G_{10}^\gen + G_9^\gen)=0$, which illustrates several subtleties. Since $h^2_{10,9}=0$ by Step~\ref{item:NumericalStep_thm:ExceptCollection_MaxLength}, we only have to prove $h^0_{10,9}=0$. Thus, we take $D_{10,9}^\gen := -G_{10}^\gen + G_9^\gen$. As in Step~\hyperref[item:Strategy_HumanPart_thm:ExceptCollection_MaxLength]{3(2)}, take $D_{10,9}' = p^*(3H) + 3F_9 - 2C_0 + 2C_1 - E_1 - (C_2+E_2+E_3) + 2(2C_2+E_2)$. We have $(D_{10,9}'.C_2)=-3$, and $(D_{10,9}'-C_2-E_2 \mathbin. F_9) = -1$.
Let $D_{10,9}:= D_{10,9}' - C_2-E_2-F_9$. Then, $h^0(D_{10,9}) = h^0(D_{10,9}') \geq h^0(D_{10,9}^\gen)$. As in Step~\hyperref[item:M2Part_thm:ExceptCollection_MaxLength]{5(2)}, the divisor $D_{10,9}$ can be rewritten as \[ D_{10,9} = p^*(9H) - 2 \sum_{i=1}^8 F_i - 5E_1 - 4E_2 - 7E_3. \] Since $\mathcal I_{E_2+E_3}^2$ imposes more conditions than $\mathcal I_{E_2+2E_3}$, the ideal of (minimal) conditions corresponding to $-4E_2 - 7E_3$ is $\mathcal I_{E_2+E_3} \cdot \mathcal I_{E_2+2E_3}^3$. Thus, the plane curve $p_*D_{10,9}$ corresponds to a nonzero section of \[ H^0(\mathcal O_{\P^2}(9) \otimes \mathcal J_8^2 \cdot \mathcal I_{E_1}^5 \cdot \mathcal I_{E_2+E_3} \cdot \mathcal I_{E_2+2E_3}^3 ). \] Using Macaulay2, we find that the rank of this group is zero. This can be found in \texttt{ExcColl\_Dolgachev.m2}\,\cite{ChoLee:Macaulay2}; a condensed sketch of this computation is given after the proof. In a similar way, we obtain the following table (be aware of the difference with (\ref{eq:thm:ExceptCollection_MaxLength})). \begin{equation} \scalebox{0.9}{$ \begin{array}{c|ccccc} & G_0^\gen & G_8^\gen & G_9^\gen & G_{10}^\gen & G_{11}^\gen \\[2pt] \hline G_0^\gen & 1\,0\,0 & 0\,0\,1 & 0\,0\,1 & 0\,0\,3 & 0\,0\,6 \\ G_8^\gen & & 1\,0\,0 & & 0\,0\,2 & 0\,0\,5\\ G_9^\gen & & & 1\,0\,0 & 0\,0\,2 & 0\,0\,5 \\ G_{10}^\gen & & & & 1\,0\,0 & 0\,0\,3 \\ G_{11}^\gen & & & & & 1\,0\,0 \end{array} $} \label{eq:M2Computation_thm:ExceptCollection_MaxLength} \end{equation} Table~\ref{table: Macaualy2 Computations} gives a short summary of the computations done in \texttt{ExcColl\_Dolgachev.m2}\,\cite{ChoLee:Macaulay2}. \[ \scalebox{0.9}{$ \begin{array}{c|c|l} (i,j) & \text{result} & \multicolumn{1}{c}{\text{choice of }D_{ij}} \\ \hline (9,0) & h^0_{9,0}=0 & p^*(5H) - \sum_{i\leq9} F_i - 3E_1 - 2E_2 - 4E_3 \\ (10,0) & h^0_{10,0}=0 & p^*(14H) - 3\sum_{i\leq8} F_i - 8E_1 - 6E_2-11E_3 \\ (10,8) & h^0_{10,8}=0 & p^*(9H) - 2\sum_{i\leq7} F_i - F_8 - 6E_1 - 3E_2 - 6E_3 \\ (10,9) & h^0_{10,9}=0 & p^*(9H) - 2\sum_{i\leq8} F_i - 5E_1 - 4E_2 - 7E_3 \\ (11,0) & h^0_{11,0}=0 & p^*(31H) - 7\sum_{i\leq8} F_i - F_9 - 18E_1 - 11E_2 - 22E_3 \\ (11,8) & h^0_{11,8}=0 & p^*(26H) - 6\sum_{i\leq7} F_i - 5F_8 - F_9 - 14E_1 - 10E_2 - 20E_3 \\ (11,9) & h^0_{11,9}=0 & p^*(26H) - 6\sum_{i\leq8} F_i - 15E_1 - 9E_2 - 18E_3 \\[3pt] \hline (0,10) & h^2_{0,10}=3 & p^*(17H) - 4\sum_{i\leq8} F_i - F_9 - 9E_1 - 6E_2 - 12E_3 \\ (0,11) & h^2_{0,11}=6 & p^*(31H) - 7\sum_{i\leq8} F_i - F_9 - 17E_1 - 12E_2 - 23E_3 \\ (8,11) & h^2_{8,11}=5 & p^*(26H) - 6\sum_{i\leq7} F_i - 5F_8 - F_9 - 15E_1 - 9E_2 - 18E_3 \\ (9,11) & h^2_{9,11}=5 & p^*(26H) - 6\sum_{i\leq8} F_i - 14E_1 - 10E_2 - 19E_3 \end{array} $} \]\nopagebreak\vskip-\baselineskip\captionof{table}{Summary of the Macaulay2 computations}\label{table: Macaualy2 Computations}\vskip+0.33\baselineskip Note that the numbers $h_{11,10}^p$ and $h_{10,11}^p$ are computed freely; indeed, $-G_{11}^\gen + G_{10}^\gen = -G_{10}^\gen$, thus $h^p_{11,10} = h^p_{10,0}$ and $h^p_{10,11}=h^p_{0,10}$. Finally, perturb the cubics $p_*C_1$ and $p_*C_2$ so that (\ref{eq:M2Computation_thm:ExceptCollection_MaxLength}) remains valid and Lemma~\ref{lem:BasePtPermutation} is applicable. Then, (\ref{eq:thm:ExceptCollection_MaxLength}) is verified immediately. \qedhere \end{enumerate} \end{proof}
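For the reader who wishes to reproduce the computation sketched in the last step without consulting \texttt{ExcColl\_Dolgachev.m2}, the following condensed Macaulay2 session (our reconstruction; the variable names are ours) treats the $(i,j)=(10,9)$ entry of Table~\ref{table: Macaualy2 Computations}. The cubics and ideals are those of Step~7 and Table~\ref{table:Ideal_ofConditions_thm:ExceptCollection_MaxLength}.
\begin{verbatim}
-- Conditions of D_{10,9} = p^*(9H) - 2(F_1+...+F_8) - 5E_1 - 4E_2 - 7E_3
R = QQ[x,y,z];
h1 = (y-z)^2*z - x^3 - x^2*z;
h2 = x^3 - 2*x*y^2 + 2*x*y*z + y^2*z;
J9 = ideal(h1, h2);                         -- nine base points
J7 = J9 : (ideal(x+z, y-z) * ideal(x, z));  -- remove p_*F_8 and p_*F_9
J8 = ideal(x+z, y-z) * J7;                  -- eight base points
IE1   = ideal(x, y-z);                      -- node of p_*C_1
IE23  = ideal(x, y);                        -- node of p_*C_2
IE2E3 = ideal(x^2, y);                      -- distinguished tangent direction
I = J8^2 * IE1^5 * IE23 * IE2E3^3;          -- the ideal of conditions I_C
-- h^0(O_{P^2}(9) tensor I_C) = dim I_9 = 55 - hilbertFunction(9, I)
binomial(11,2) - hilbertFunction(9, I)      -- expected value: 0
\end{verbatim}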
\begin{remark}\label{rmk:Configuration_andCohomology} Assume that the nodal curves $p_*C_1$, $p_*C_2$ are in a special position so that the node of $p_*C_1$ is located on the distinguished tangent line at the node of $p_*C_2$. Then, the proper transform $\ell$ of the unique line through the nodes of $p_*C_1$ and $p_*C_2$ has the following divisor expression: \[ \ell = p^*H - E_1 - (E_2 + 2E_3). \] In particular, the divisor $D_{90} = p^*(5H) - \sum_{i\leq9} F_i - 3E_1 - 2(E_2+2E_3)$ is linearly equivalent to $2 \ell + C_1 + E_1$, thus $h^0(D_{90}) > 0$. Consequently, for this particular configuration of $p_*C_1$ and $p_*C_2$, we cannot prove $h^0_{90} = 0$ using upper semicontinuity. However, the numerical method (Step~\ref{item:Strategy_HumanPart_thm:ExceptCollection_MaxLength} in the proof of the previous theorem) cannot detect such variations arising from the position of the nodal cubics, hence it cannot be applied to the proof of $h^0_{90}=0$. \end{remark} The following lemma, used at the end of the proof of Theorem~\ref{thm:ExceptCollection_MaxLength}, illustrates the symmetric nature of $F_1,\ldots,F_8$. \begin{lemma}\label{lem:BasePtPermutation} Assume that $X^\gen$ originates from a cubic pencil generated by two general plane nodal cubics $p_*C_1$ and $p_*C_2$. Let $D \in \Pic Y$ be a divisor on the rational elliptic surface $Y$. Assume that in the expression of $D$ in terms of the $\Z$-basis $\{p^*H, F_1,\ldots, F_9, E_1, E_2, E_3\}$, the coefficients of $F_1,\ldots, F_8$ are the same. Then, $h^p(D+F_i) = h^p(D+F_j)$ for any $p \geq 0$ and $1 \leq i,j \leq 8$. \end{lemma} \begin{proof} Since $\op{Aut}\P^2 = \op{PGL}(3,\C)$ sends any $4$ points\,(no three of which are collinear) to any $4$ points\,(no three of which are collinear), we may assume the following. \begin{enumerate} \item\label{item:lem:BasePtPermutation_DistinguishedBasePt} The base point $p_*F_9$ is $\Q$-rational. \item The nodes of $p_*C_1$ and $p_*C_2$ are $\Q$-rational. \item The distinguished tangent direction at the node of $p_*C_2$ is defined over $\Q$. \end{enumerate} Now, let $K$ be the field extension of $\Q$ generated by the coefficients of the cubic forms defining $p_*C_1$, $p_*C_2$. Since $p_*C_1$, $p_*C_2$ are general, we may assume the following: \begin{enumerate}[resume] \item\label{item:lem:BasePtPermutation_affineBasePts} The base points $p_*F_1,\ldots,p_*F_9$ are contained in the affine chart $(z \neq 0) \subset \P^2_{x,y,z}$. \item\label{item:lem:BasePtPermutation_Resultant} Let $h_i \in K[x,y,z]$ be the defining equation of $p_*C_i$, and let $\op{res}(h_1,h_2;x)$ be the resultant of $h_1(x,y,1)$, $h_2(x,y,1)$ regarded as elements in $(K[x])[y]$. The irreducible factorization of $\op{res}(h_1,h_2;x)$ over $K$ consists of a linear form and an irreducible polynomial, say $H_x$, of degree $8$. The same holds for $\op{res}(h_1,h_2;y)$, {\it i.e.} $\op{res}(h_1,h_2;y) = (y-c) H_y$ for an irreducible polynomial $H_y \in K[y]$ of degree $8$. We assume further that $H_x \neq H_y$ up to multiplication by $K^\times$. \end{enumerate} The last condition has the following interpretation. Let $p_* F_i = [\alpha_i,\beta_i,1] \in \P^2$ for $\alpha_i,\beta_i \in \C$ and $i=1,\ldots,9$. The resultant $\op{res}(h_1,h_2;x) \in K[x]$ is the polynomial having $\{\alpha_i\}_{i=1}^9$ as its roots. By conditions \ref{item:lem:BasePtPermutation_DistinguishedBasePt} and \ref{item:lem:BasePtPermutation_affineBasePts}, $\alpha_9 \in \Q$, so a linear factor must appear in $\op{res}(h_1,h_2;x)$.
Hence, \ref{item:lem:BasePtPermutation_Resultant} implies that $\alpha_1,\ldots,\alpha_8$ are Galois conjugate over $K$, which should be true for general $p_*C_1$, $p_*C_2$. The same is assumed to be true for $\beta_1,\ldots,\beta_8$, and the final sentence says that $\{\alpha_1,\ldots,\alpha_8\} \neq \{\beta_1,\ldots,\beta_8\}$. Let $\tau \in \op{Aut}(\C/K)$ be a field automorphism fixing $K$, and mapping $\alpha_i$ to $\alpha_j$\,($1 \leq i,j \leq 8$). Then $\tau$ induces an automorphism of $\P^2$ which fixes $p_*C_1$ and $p_*C_2$. It follows that $[\alpha_j, \tau(\beta_i), 1]$ is one of the eight base points $\{p_*F_i\}_{i=1}^8$. Since $H_x$ and $H_y$ are different up to multiplication by $K^\times$, there is no point of the form $[\alpha_j, \beta_k, 1]$ in the set $\{p_*F_i\}_{i=1}^8$ except when $k=j$. It follows that $\tau(\beta_i) = \beta_j$. Let $\tau_Y \colon Y \to Y$ be the automorphism induced by $\tau$. According to the assumptions \ref{item:lem:BasePtPermutation_DistinguishedBasePt}--\ref{item:lem:BasePtPermutation_Resultant}, it satisfies the following properties: \begin{enumerate}[label=(\arabic{enumi})] \item $\tau_Y$ fixes $F_9, E_1, E_2, E_3$; \item $\tau_Y$ permutes $F_1,\ldots,F_8$; \item $\tau_Y$ maps $F_i$ to $F_j$. \end{enumerate} Furthermore, since the coefficients of $F_1,\ldots,F_8$ are the same in the expression of $D$, $\tau_Y$ fixes $D$. It follows that $\tau_Y^* \colon \Pic Y \to \Pic Y$ maps $D+F_j$ to $D+F_i$. In particular, $H^p(D+F_j) = H^p(\tau_Y^*(D+F_i)) \simeq H^p( D+F_i)$ for any $1 \leq i,j \leq 8$. \qedhere \end{proof}
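The genericity assumptions (1)--(5) are easy to probe by machine for a concrete pencil. For instance, the following Macaulay2 fragment (ours) computes the degrees of the $\Q$-irreducible components of the base locus for the cubics of Step~7 in the proof of Theorem~\ref{thm:ExceptCollection_MaxLength}; the output describes the Galois orbit structure of the nine base points, with the rational points $p_*F_8$, $p_*F_9$ appearing among the degree-one components. This is the structure that must be rearranged by the perturbation at the end of that proof.
\begin{verbatim}
-- Galois orbit structure of the base points of the Step 7 pencil
R = QQ[x,y,z];
h1 = (y-z)^2*z - x^3 - x^2*z;
h2 = x^3 - 2*x*y^2 + 2*x*y*z + y^2*z;
J9 = saturate ideal(h1, h2);   -- ideal of the nine base points
apply(decompose J9, degree)    -- degrees of QQ-irreducible components; sums to 9
\end{verbatim}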
\subsection{Incompleteness of the collection}\label{subsec:Incompleteness} Let $\mathcal A \subset \D^{\rm b}(X^\gen)$ be the orthogonal subcategory \[ \bigl\langle \mathcal O_{X^\gen}(G_0^\gen),\ \mathcal O_{X^\gen}(G_1^\gen),\ \ldots,\ \mathcal O_{X^\gen}(G_{11}^\gen) \bigr\rangle^\perp, \] so that there exists a semiorthogonal decomposition \[ \D^{\rm b}(X^\gen) = \bigl\langle \mathcal A,\ \mathcal O_{X^\gen}(G_0^\gen),\ \mathcal O_{X^\gen}(G_1^\gen),\ \ldots,\ \mathcal O_{X^\gen}(G_{11}^\gen) \bigr\rangle. \] We will prove that $K_0(\mathcal A) = 0$, $\op{HH}_\bullet(\mathcal A) = 0$, but $\mathcal A\not\simeq 0$. Such a category is called a \emph{phantom} category. To give a proof, we claim that the \emph{pseudoheight} of the collection~(\ref{eq:ExcColl_MaxLength}) is at least $2$. Once we achieve the claim, \cite[Corollary~4.6]{Kuznetsov:Height} implies that $\op{HH}^0(\mathcal A) \simeq \op{HH}^0(X^\gen) = \C$, thus $\mathcal A\not\simeq 0$. \begin{definition}\ \begin{enumerate} \item Let $E_1,E_2$ be objects in $\D^{\rm b}(X^\gen)$. The \emph{relative height} $e(E_1,E_2)$ is the minimum of the set \[ \{ p : \Hom(E_1,E_2[p]) \neq 0 \} \cup \{ \infty \}. \] \item Let $\langle F_0,\ldots,F_m\rangle$ be an exceptional collection in $\D^{\rm b}(X^\gen)$. The \emph{anticanonical pseudoheight} is defined by \[ \op{ph}_{\rm ac}(F_0,\ldots,F_m) = \min \Bigl ( \sum_{i=1}^p e(F_{a_{i-1}}, F_{a_i}) + e(F_{a_p} , F_{a_0} \otimes \mathcal O_{X^\gen}(-K_{X^\gen})) - p \Bigr), \] where the minimum is taken over all possible tuples $0 \leq a_0 < \ldots < a_p \leq m$. \end{enumerate} \end{definition} The pseudoheight is given by the formula $\op{ph}(F_0,\ldots,F_m) = \op{ph}_{\rm ac}(F_0,\ldots,F_m) + \dim X^\gen$, thus it suffices to prove that $\op{ph}_{\rm ac}(G_0^\gen,\ldots,G_{11}^\gen) \geq 0$. \begin{corollary}\label{cor:Phantom} In the semiorthogonal decomposition \[ \D^{\rm b}(X^\gen) = \bigl \langle \mathcal A,\ \mathcal O_{X^\gen}(G_0^\gen),\ \ldots,\ \mathcal O_{X^\gen}(G_{11}^\gen)\bigr\rangle, \] we have $K_0(\mathcal A) = 0$ and $\op{HH}_\bullet(\mathcal A)=0$. Also, $\op{ph}_{\rm ac}(G_0^\gen,\ldots,G_{11}^\gen) = 2$, thus the restriction map $\op{HH}^p(X^\gen) \to \op{HH}^p(\mathcal A)$ is an isomorphism for $p \leq 2$ and is a monomorphism for $p=3$. In particular, $\op{HH}^0(\mathcal A) \simeq \C$. \end{corollary} \begin{proof} Since $\kappa(X^\gen) = 1$, the Bloch conjecture holds for $X^\gen$\,\cite[\textsection11.1.3]{Voisin:HodgeTheory2}. Thus the Grothendieck group $K_0(X^\gen)$ is a free abelian group of rank $12$\,(see for {\it e.g.} \cite[Lemma~2.7]{GalkinShinder:Beauville}). Furthermore, Hochschild-Kostant-Rosenberg isomorphism for Hochschild homology says \[ \op{HH}_k(X^\gen) \simeq \bigoplus_{q-p=k} H^{p,q}(X^\gen), \] hence, $\op{HH}_\bullet(X^\gen) \simeq \C^{\oplus 12}$. 
It is well-known that $K_0$ and $\op{HH}_\bullet$ are additive invariants with respect to semiorthogonal decompositions, thus $K_0(X^\gen) \simeq K_0(\mathcal A) \oplus K_0({}^\perp\mathcal A)$, and $\op{HH}_\bullet(X^\gen) = \op{HH}_\bullet(\mathcal A) \oplus \op{HH}_\bullet({}^\perp \mathcal A)$.\footnote{By definition of $\mathcal A$, ${}^\perp \mathcal A$ is the smallest full triangulated subcategory containing the collection (\ref{eq:ExcColl_MaxLength}) in Theorem~\ref{thm:ExceptCollection_MaxLength}.} If $E$ is an exceptional vector bundle, then $\D^{\rm b}(\langle E\rangle ) \simeq \D^{\rm b}(\Spec \C)$ as $\C$-linear triangulated categories, thus $K_0({}^\perp\mathcal A) \simeq \Z^{\oplus 12}$ and $\op{HH}_\bullet({}^\perp\mathcal A)\simeq \C^{\oplus12}$. It follows that $K_0(\mathcal A) = 0$ and $\op{HH}_\bullet(\mathcal A)=0$. Assume the chain $0\leq a_0 < \ldots < a_p \leq 11$ has length $p=0$. Then, $e(G_{a_0}^\gen,G_{a_0}^\gen-K_{X^\gen}) = 2$ since $\dim \Ext_{X^\gen}^p(G_i^\gen, G_i^\gen - K_{X^\gen}) = h^p(-K_{X^\gen}) = 1$ for $p=2$ and $0$ otherwise. For any $0 \leq j < i \leq 11$, \[ e(G_j^\gen, G_i^\gen) =\left\{ \begin{array}{ll} \infty & \text{if } 1 \leq j < i \leq 9 \\ 2 & \text{otherwise} \end{array} \right. \] by Theorem~\ref{thm:ExceptCollection_MaxLength}. Also, it is easy to see that $H^0(G_i^\gen, G_j^\gen - K_{X^\gen}) = 0$ for $i > j$, thus for any chain $0 \leq a_0 < \ldots< a_p \leq 11$, \[ e(G_{a_0}^\gen, G_{a_1}^\gen) + \ldots + e(G_{a_{p-1}}^\gen, G_{a_p}^\gen) + e(G_{a_p}^\gen, G_{a_0}^\gen - K_{X^\gen}) - p \geq 2p + 1 - p, \] which shows that the value of the left hand side is at least $2$ for any chain of length${}>0$. It follows that $\op{ph}_{\rm ac}(G_0^\gen,\ldots,G_{11}^\gen) =2$. The statements about $\op{HH}^\bullet$ follow immediately from \cite[Corollary~4.6]{Kuznetsov:Height}. \qedhere \end{proof}
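For concreteness, the Hochschild-Kostant-Rosenberg computation in the proof above can be made explicit (this worked instance is ours, using only the standard invariants $q = p_g = 0$ and $b_2 = 10$ of a Dolgachev surface): \[ \op{HH}_0(X^\gen) \simeq H^{0,0}(X^\gen) \oplus H^{1,1}(X^\gen) \oplus H^{2,2}(X^\gen) \simeq \C^{1+10+1} = \C^{12}, \] while $\op{HH}_k(X^\gen) = 0$ for $k \neq 0$ since $h^{1,0}=h^{0,1}=h^{2,0}=h^{0,2}=0$. This recovers $\op{HH}_\bullet(X^\gen) \simeq \C^{\oplus 12}$, concentrated in degree zero.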
\subsection{Cohomology computations} We present Dictionary~\ref{dictionary:H0Computations} of the cohomology computations that appeared in the proof of Theorem~\ref{thm:ExceptCollection_MaxLength}. It uses the divisors illustrated in Figure~\ref{fig:Configuration_Basic}, together with one more curve which does not appear in Figure~\ref{fig:Configuration_Basic}. Let $\ell$ be the proper transform of the unique line in $\P^2$ passing through the nodes of $p_* C_1$ and $p_* C_2$. In divisor form, \[ \ell = p^*H - E_1 - (E_2 + E_3). \] Due to the divisor forms \[ \begin{array}{r@{}l} C_1 &{}=p^*(3H) - 2E_1 - \sum_{i=1}^9F_i, \\ C_2 &{}=p^*(3H) - (2E_2 + 3E_3) - \sum_{i=1}^9 F_i,\ \text{and} \\ C_0 &{}=p^*(3H) - \sum_{i=1}^9 F_i, \end{array} \] it is straightforward to write down the intersections involving $\ell$: \[ \begin{array}{ c | c | c | c | c | c | c | c | c | c } & p^*H & F_i & C_0 & C_1 & E_1 & C_2 & E_2 & E_3 & \ell \\ \hline \ell & 1 & 0 & 3 & 1 & 1 & 1 & 1 & 0 & -1 \end{array} \] \begin{dictionary}\label{dictionary:H0Computations} For each of the following Cartier divisors on $Y$, we give upper bounds of $h^0$. The main strategy is the following. We take smooth rational curves $A_1,\ldots, A_r$, and consider the exact sequence \[ 0 \to H^0(D - S_i) \to H^0(D - S_{i-1}) \to H^0(\mathcal O_{A_i}( D - S_{i-1}) ), \] where $S_i = \sum_{j\leq i} A_j$. This gives the inequality $h^0(D - S_{i-1}) \leq h^0(D - S_i) + h^0( (D - S_{i-1})\big\vert_{A_i})$. Inductively, we get \begin{equation} h^0(D) \leq h^0(D - S_r) + \sum_{i=1}^{r-1} h^0( ( D - S_i)\big\vert_{A_{i+1}}). \label{eq:dictionary:UpperBound} \end{equation} In what follows, we choose $A_1,\ldots,A_r$ carefully so that $h^0(D- S_r)=0$ and so that the values $h^0((D - S_{i-1})\big\vert_{A_i})$ are as small as possible. In each item in the dictionary, we first present the target divisor $D$ and the bound of $h^0(D)$. We then give a list of smooth rational curves in the following format: \[ A_1,\ A_2,\ \ldots,\ A_i\textsuperscript{(\checkmark)},\ \ldots\ , A_r. \] The symbol $(\checkmark)$ indicates the situation when $(D - S_{i-1} \mathbin. A_i) = 0$, the case in which the right hand side of (\ref{eq:dictionary:UpperBound}) increases by $1$. The curves without symbols indicate the situations in which $(D - S_{i-1} \mathbin . A_i) < 0$, so that $A_i$ does not contribute to the bound of $h^0(D)$. We conclude by showing that $D - S_r$ is not an effective divisor. The upper bound of $h^0(D)$ will thus be given by the number of $(\checkmark)$'s in the list. Since all of these calculations are routine, we omit the details. From now on, $i$ denotes an arbitrary index in $\{1,2,\ldots,8\}$. \begin{enumerate}[label=\normalfont(\arabic{enumi}), itemsep=7pt plus 5pt minus 0pt] \item\label{dictionary:-G_i} $D=p^*(2H) + F_9 - F_i - 2C_0 + C_1 + C_2 + E_2 + E_3$ $h^0(D)=0$ \\ The following is the list of curves $A_1,\ldots,A_r$\,(the order is important): $F_9,\, \ell,\, E_2,\, \ell$. The resulting divisor is \[ D - A_1 - \ldots - A_r = p^*(2H) - F_i - 2C_0 + C_1 + C_2 + E_3 - 2\ell. \] Since $\ell = p^*H - E_1 - (E_2 + E_3)$ and $C_0 = C_1 + 2E_1 = C_2 + 2E_2 + 3E_3$, $D - A_1 - \ldots - A_r = -F_i$. It follows that $H^0(D) \simeq H^0( - F_i) = 0$. \item\label{dictionary:Nonvanish_K-G_i} $D = p^*(2H) + F_9 - F_i - C_0 + C_1 -E_1 + 2C_2 + E_2.$ $h^0(D) \leq 1$ \\ Rule out $C_2,\,E_2,\,\ell\textsuperscript{(\checkmark)},\,C_1,\,F_9,\,C_2,\,\ell,\,E_1$. The resulting divisor is $p^*(2H) - F_i - C_0 - 2E_1 - 2\ell = -F_i - C_2 - E_3$.
Since there is only one checkmark, $h^0(D) \leq h^0(-F_i - C_2 - E_3) + 1 = 1$. \item\label{dictionary:Nonvanish_K-G_9} $D = p^*(2H) - 2C_0 + C_1 + C_2 + E_2 + E_3$ $h^0(D) \leq 1$ \\ Rule out $\ell,\, E_2,\, \ell,\, C_2\textsuperscript{(\checkmark)}$. The remaining part is $ p^*(2H) - 2C_0 + C_1 + E_3 - 2\ell = - C_2$, thus $h^0(D) \leq 1$. \item\label{dictionary:Nonvanish_K+G_i-G_10} $D = p^*(3H) + 2F_9 + F_i - 2C_0 + 2C_1 - E_1 + 3C_2 + E_2 - E_3$ $h^0(D) \leq 2$ \\ The following is the list of divisors that we have to remove: \[ C_2,\ E_2,\ \ell\textsuperscript{(\checkmark)},\ E_2,\ F_9\textsuperscript{(\checkmark)},\ C_2,\ E_2,\ \ell,\ C_1,\ F_9,\ F_i,\ \ell. \] The remaining part is $p^*(3H) - 2C_0 + C_1 - E_1 + C_2 - E_2 - E_3 - 3\ell = -E_3$, thus $h^0(D) \leq 2$. \item\label{dictionary:Nonvanish_K+G_9-G_10} $D = p^*(3H) + 3F_9 - 3C_0 + 3C_1 + 3C_2 + 2E_2 + E_3$ $h^0(D) \leq 2$ \\ Rule out the following curves: \[ F_9\textsuperscript{(\checkmark)},\ C_1,\ C_2,\ E_2,\ F_9,\ \ell,\ E_2\textsuperscript{(\checkmark)},\ \ell,\ C_2,\ \ell,\ E_2,\ E_3,\ F_9,\ C_1,\ E_1. \] The remaining part is $p^*(3H) - 3C_0 + C_1 - E_1 + C_2 - E_2 -3\ell = -C_0$, thus $h^0(D) \leq 2$. \end{enumerate} \end{dictionary} \section{Appendix}\label{sec:Appendix} \subsection{A brief review on Hacking's construction.}\label{subsec:HackingConstruction} Let $n>a>0$ be coprime integers, let $X$ be a projective normal surface with quotient singularities, and let $(P \in X)$ be a $T_1$-singularity of type $(0 \in \A^2 / \frac{1}{n^2}(1,na-1))$. Suppose there exists a one parameter deformation $\mathcal X / ( 0 \in T)$ of $X$ over a smooth curve germ $(0 \in T)$ such that $(P \in \mathcal X) / (0 \in T)$ is a $\Q$-Gorenstein smoothing of $(P \in X)$. \begin{proposition}[{\cite[\textsection3]{Hacking:ExceptionalVectorBundle}}]\label{prop:HackingWtdBlup} Take the base extension $(0 \in T') \to (0 \in T)$ of ramification index $a$, and let $\mathcal X'$ be the pull back along the extension. Then, there exists a proper birational morphism $\Phi \colon \tilde{\mathcal X} \to \mathcal X'$ satisfying the following properties. \begin{enumerate} \item The exceptional fiber $W = \Phi^{-1}(P)$ is isomorphic to the projective normal surface \[ (xy = z^n + t^a) \subset \P_{x,y,z,t}(1,na-1,a,n). \] \item The morphism $\Phi$ is an isomorphism outside $W$. \item\label{item:prop:HackingWtdBlup} The central fiber $\tilde{\mathcal X}_0 = \Phi^{-1}(\mathcal X'_0)$ is reduced and has two irreducible components: $\tilde X_0$ the proper transform of $X$, and $W$. The intersection $Z:=\tilde X_0 \cap W$ is a smooth rational curve given by $(t=0)$ in $W$. Furthermore, the surface $\tilde X_0$ can be obtained in the following way: take a minimal resolution $Y \to X$ of $(P \in X)$, and let $E_1,\ldots,E_r$ be the chain of exceptional curves arranged in such a way that $(E_i . E_{i+1})=1$ and $(E_r^2) = -2$. Then the contraction of $E_2,\ldots,E_r$ defines $\tilde X_0$. Clearly, $E_1$ maps isomorphically onto $Z$ along the contraction $Y \to \tilde X_0$. \end{enumerate} \end{proposition} \begin{proposition}[{\cite[Proposition~5.1]{Hacking:ExceptionalVectorBundle}}]\label{prop:Hacking_BundleG} There exists an exceptional vector bundle $G$ of rank $n$ on $W$ such that $G \big\vert_{Z} \simeq \mathcal O_Z(1)^{\oplus n}$. 
\end{proposition} \begin{remark}\label{rmk:SimplestSingularCase} Note that in the decomposition $\tilde{\mathcal X}_0 = \tilde X_0 \cup W$, the surface $W$ is completely determined by the type of singularity $(P \in X)$, whereas $\tilde X_0$ reflects the global geometry of $X$. In some circumstances, $W$ and $G$ have explicit descriptions. \begin{enumerate} \item Suppose $a=1$. In $\P_{x,y,z,t}(1,n-1,1,n)$, we have $W_2 =( xy = z^n + t)$ and $Z_2 = (xy=z^n, t=0)$ by Proposition~\ref{prop:HackingWtdBlup}. The projection map $\P_{x,y,z,t}(1,n-1,1,n) \dashrightarrow \P_{x,y,z}(1,n-1,1)$ sends $W_2$ isomorphically onto $\P_{x,y,z}$, thus we get \[ W_2 \simeq \P_{x,y,z}(1,n-1,1),\quad\text{and}\quad Z_2 \simeq (xy=z^n) \subset \P_{x,y,z}(1,n-1,1). \] \item Suppose $(n,a) = (2,1)$, then it can be shown (by following the proof of Proposition~\ref{prop:Hacking_BundleG}) that $W = \P_{x,y,z}^2$, $G = \mathcal T_{\P^2}(-1)$ where $\mathcal T_{\P^2} = (\Omega_{\P^2}^1)^\vee$ is the tangent sheaf of the plane. Moreover, the smooth rational curve $Z = \tilde X_0 \cap W$ is embedded as a smooth conic $(xy = z^2)$ in $W$. \end{enumerate} \end{remark} The final proposition presents how to obtain an exceptional vector bundle on the general fiber of the smoothing. \begin{proposition}[{\cite[\textsection4]{Hacking:ExceptionalVectorBundle}}]\label{prop:HackingDeformingBundles} Let $X^\gen$ be the general fiber of the deformation $\mathcal X / (0 \in T)$, and assume $H^2(\mathcal O_{X^\gen}) = H^1(X^\gen,\Z) = 0$.\footnote{Since quotient singularities are Du Bois, we have $H^1(\mathcal O_X) = H^2(\mathcal O_X) = 0$. ({\it cf.} \cite[Lem.~4.1]{Hacking:ExceptionalVectorBundle})} Let $G$ be the exceptional vector bundle on $W$ in Proposition~\ref{prop:Hacking_BundleG}. Suppose there exists a Weil divisor $D \in \Cl X$ such that $D$ does not pass through the singular points of $X$ except $P$, the proper transform $D' \subset \tilde X_0$ of $X$ satisfies $(D'. Z) = 1$, and $\op{Supp} D' \subset \tilde X_0 \setminus \op{Sing} \tilde X_0$. Then the vector bundles $\mathcal O_{\tilde X_0}(D')^{\oplus n}$ and $G$ glue along $\mathcal O_Z(1)^{\oplus n}$ to produce an exceptional vector bundle $\tilde E$ on $\tilde{\mathcal X}_0$. Furthermore, the vector bundle $\tilde E$ deforms uniquely to an exceptional vector bundle $\tilde{\mathcal E}$ on $\tilde{\mathcal X}$. Restriction $\tilde{\mathcal E}\big\vert_{X^\gen}$ to the general fiber is an exceptional vector bundle on $X^\gen$ of rank $n$. \end{proposition}
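In the simplest singular case $(n,a)=(2,1)$ of Remark~\ref{rmk:SimplestSingularCase}, the exceptionality of Hacking's bundle can be verified directly. The following Macaulay2 fragment (our sketch, not part of the cited references) computes $\Ext^p(G,G)$ for $G = \mathcal T_{\P^2}(-1)$.
\begin{verbatim}
-- Exceptionality of G = T_{P^2}(-1), the Hacking bundle for (n,a) = (2,1)
X = Proj(QQ[x,y,z]);
G = (dual cotangentSheaf X) ** OO_X(-1);  -- tangent sheaf twisted by O(-1)
apply(3, p -> rank Ext^p(G, G))           -- expected: {1, 0, 0}
\end{verbatim}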
\footnotesize \noindent{\bf Acknowledgments.} The first author thanks Kyoung-Seog Lee for helpful comments on derived categories. He also thanks Alexander Kuznetsov and Pawel Sosna for explaining the technique of heights used in Section~\ref{subsec:Incompleteness}. The second author thanks Fabrizio Catanese and Ilya Karzhemanov for useful remarks. The authors are grateful to the anonymous referee for many valuable comments. This work was supported by the Global Ph.D. Fellowship Program through the National Research Foundation of Korea\,(NRF) funded by the Ministry of Education\,(No.~2013H1A2A1033339)\,(to Y.C.), and was partially supported by the NRF of Korea funded by the Korean government\,(MSIP)\,(No.~2013006431)\,(to Y.L.). \end{document}
\begin{document} \centerline{\bf\large Riemann Spaces and Pfaff Differential Forms} \vskip .4in \centerline{\textbf{Nikos D. Bagis}} \centerline{\textbf{Aristotle University of Thessaloniki-AUTH-Greece}} \centerline{\textbf{[email protected]}} \vskip .2in \centerline{\bf Abstract} In this work we study differential geometry in $N$-dimensional curved Riemann spaces using Pfaff derivatives. Avoiding the classical partial derivative, the Pfaff derivatives are constructed in a more refined way and make evaluations easier. In this way the Christoffel symbols $\Gamma_{ikj}$ of classical Riemann geometry, as well as the elements of the metric tensor $g_{ij}$, are replaced by a single symbol (the $q_{ikj}$); in fact, to describe the space we make no use of the metric tensor $g_{ij}$ at all. We also do not use Einstein's notation, which simplifies matters considerably: for example, we need no upper and lower indices, which to the eyes of a beginner are quite confusing. Nor do we use the concept of a tensor: all quantities of the surface, curve or space which form a tensor field are called invariants or curvatures of the space. Several new ideas are developed on this basis.\\ \\ \textbf{Keywords:} Riemann geometry; Curved spaces; Pfaff derivatives; Differential operators; Invariant theory
\section{Introduction and Development of the Theory} Here we assume an $N$-dimensional space $\bf{S}$. The space $\bf{S}$ will be described by the vector \begin{equation} \overline{x}=\sum^{N}_{i=1}x_i(u_1,u_2,\ldots,u_N)\overline{\epsilon}_i , \end{equation} where $\overline{\epsilon}_i$ is the usual orthonormal basis of $\textbf{E}=\textbf{R}^{N}$. We assume that to every point of the space $\bf{S}$ there correspond $N$ orthonormal vectors $\{\overline{e}_1,\overline{e}_2,\ldots,\overline{e}_{N}\}$. These $N$ vectors $\{\overline{e}_1,\overline{e}_2,\ldots,\overline{e}_{N}\}$ span the space $\bf{S}$. We will use Pfaff derivatives to write our equations, and we also study some properties of $\bf{S}$ which we will need for the construction of these equations. The Pfaff derivatives are related to the structure of the space $\bf{S}$, which produces the differential forms $\omega_k$, $k=1,2,\ldots,N$. These are defined as follows.\\ It holds that \begin{equation} \partial_j\overline{x}=\sum^{N}_{i=1}\partial_jx_i\overline{\epsilon}_i. \end{equation} The linear element of $\bf{S}$ is \begin{equation} (ds)^2=(d\overline{x})^2=\sum^{N}_{i,j=1}\left\langle \partial_i\overline{x},\partial_j\overline{x}\right\rangle du_idu_j=\sum^{N}_{i=1}g_{ii}du_i^2+2\sum_{i<j}g_{ij}du_idu_j. \end{equation} Hence \begin{equation} g_{ij}=\left\langle \frac{\partial \overline{x}}{\partial u_i},\frac{\partial \overline{x}}{\partial u_j} \right\rangle, \end{equation} are the structure functions of the first linear form.\\ The Pfaff differential forms $\omega_k$ are defined with the help of $\{\overline{e}_k\}$, $k=1,2,\ldots,N$, by \begin{equation} d\overline{x}=\sum^{N}_{k=1}\omega_k\overline{e}_k. \end{equation} Then the Pfaff derivatives of a function $f$ are the $\nabla_kf$, $k=1,2,\ldots,N$, and it holds that \begin{equation} df=\sum^{N}_{k=1}(\partial_kf)du_k=\sum^{N}_{k=1}(\nabla_kf)\omega_k . \end{equation} From (6) we get \begin{equation} d\overline{x}=\sum^{N}_{k=1}(\partial_k\overline{x})du_k. \end{equation} Also \begin{equation} d\overline{x}=\sum^{N}_{k=1}\omega_k\overline{e}_k\Rightarrow \nabla_m(\overline{x})=\overline{e}_m. \end{equation} Differentiating the vectors $\overline{e}_i$, we can write their differentials as linear combinations of these same vectors, since they form a basis of $\bf{E}$: \begin{equation} d\overline{e}_i=\sum^{N}_{k=1}\omega_{ik}\overline{e}_k. \end{equation} Then we define the connections $q_{ijm}$ and $b_{kl}$ by \begin{equation} \omega_{ij}=\sum^{N}_{m=1}q_{ijm}\omega_m\textrm{, }\omega_k=\sum^{N}_{l=1}b_{kl}du_{l}. \end{equation} Hence from (10), (5), (7) and $\left\langle \overline{e}_k ,d\overline{x}\right\rangle=\omega_k$, we get that \begin{equation} \left\langle \overline{e}_k, \partial_l\overline{x}\right\rangle=b_{kl}\textrm{ and }\partial_{l}\overline{x}=\sum^{N}_{k=1}b_{kl}\overline{e}_k \end{equation} and from (4) \begin{equation} g_{ij}=\sum^{N}_{k=1}b_{ki}b_{kj}. \end{equation} Also \begin{equation} b_{sl}=\sum^{N}_{m=1}\frac{\partial x_m}{\partial u_l}\cos\left(\phi_{ms}\right)\textrm{, where }\cos\left(\phi_{ms}\right)=\left\langle \overline{\epsilon}_{m},\overline{e}_{s}\right\rangle \end{equation} and $\phi_{ms}$ is the angle formed by $\overline{\epsilon}_m$ and $\overline{e}_s$. It also holds that \begin{equation} \omega_{ij}=\sum^{N}_{l=1}\left(\sum^{N}_{m=1}q_{ijm}b_{ml}\right)du_l. \end{equation} In this way we obtain the Christoffel symbols \begin{equation} \Gamma_{ijl}=\sum^{N}_{m=1}q_{ijm}b_{ml}. 
\end{equation} Thus, in view of (19) below, \begin{equation} \omega_{ij}=\sum^{N}_{l=1}\Gamma_{ijl}du_l\textrm{, }\Gamma_{ijl}+\Gamma_{jil}=0 \end{equation} and easily (see Proposition 1 below) \begin{equation} \partial_k\overline{e}_m=\sum^{N}_{j=1}\Gamma_{mjk}\overline{e}_j\textrm{ and }\nabla_k\overline{e}_m=\sum^{N}_{j=1}q_{mjk}\overline{e}_j. \end{equation} From the orthonormality of $\overline{e}_k$ we have \begin{equation} \delta_{ij}=\left \langle \overline{e}_i,\overline{e}_j\right \rangle. \end{equation} Differentiating the above relation, we get \begin{equation} \omega_{ij}+\omega_{ji}=0 \end{equation} and hence \begin{equation} q_{ijm}+q_{jim}=0. \end{equation} \\ \textbf{Theorem 1.}\\ The structure equations of $\bf{S}$ are (19) and \begin{equation} d\omega_{j}=\sum^{N}_{m=1}\omega_{m}\wedge\omega_{mj}\textrm{ , }d\omega_{ij}=\sum^{N}_{m=1}\omega_{im}\wedge\omega_{mj}. \end{equation} \\ \textbf{Proof.}\\ We have $$ d\left(d\overline{x}\right)=\overline{0}\Rightarrow d\left(\sum^{N}_{i=1}\overline{e}_i\omega_i\right)=\overline{0}\Rightarrow \sum^{N}_{i=1}\left(d\overline{e}_i\wedge\omega_i+\overline{e}_i d\omega_i\right)=\overline{0}. $$ Hence $$ \sum^{N}_{i=1}d\omega_i\overline{e}_i+\sum^{N}_{i,k=1}\omega_{ik}\wedge\omega_i\overline{e}_k=\overline{0}\Rightarrow d\omega_i=\sum^{N}_{l=1}\omega_{l}\wedge\omega_{li}. $$ The same argument gives the second relation of (21). $qed$\\ \\ \textbf{Definition 1.}\\ We write \begin{equation} rot_{ij}\left(A_{k\ldots i\ldots j\ldots l}\right)=A_{k\ldots i\ldots j\ldots l}-A_{k\ldots j\ldots i\ldots l}. \end{equation} \\ \textbf{Definition 2.}\\ We define the Kronecker $\delta$-symbol as follows:\\ If $\{i_1,i_2,\ldots,i_M\}$, $\{j_1,j_2,\ldots,j_M\}$ are two sets of indices, then $$ \delta_{{i_1i_2\ldots i_M},{j_1j_2\ldots j_M}}=1 $$ if $\{i_1,i_2,\ldots,i_M\}$ is an even permutation of $\{j_1,j_2,\ldots,j_M\}$, $$ \delta_{{i_1i_2\ldots i_M},{j_1j_2\ldots j_M}}=-1 $$ if $\{i_1,i_2,\ldots,i_M\}$ is an odd permutation of $\{j_1,j_2,\ldots j_{M}\}$, and $$ \delta_{{i_1i_2\ldots i_M},{j_1j_2\ldots j_M}}=0 $$ in any other case.\\ \\ Let $a=\sum^{N}_{i=1}a_i\omega_i$ and $b=\sum^{N}_{j,k=1}b_{jk}\omega_j\wedge\omega_k$; then $$ (a\wedge b)_{123}=\frac{1}{1!}\frac{1}{2!}\sum^{N}_{i,j,k=1}\delta_{{ijk},{123}}a_ib_{jk}= $$ $$ =\frac{1}{2}[a_1b_{23}\delta_{{123},{123}}+a_1b_{32}\delta_{{132},{123}}+a_2b_{13}\delta_{{213},{123}}+a_2b_{31}\delta_{{231},{123}}+ $$ $$ +a_3b_{12}\delta_{{312},{123}}+a_3b_{21}\delta_{{321},{123}}]= $$ $$ =\frac{1}{2}\left(a_1b_{23}-a_1b_{32}-a_2b_{13}+a_2b_{31}+a_3b_{12}-a_3b_{21}\right). $$ We remark here that we do not use Einstein's index notation.\\ \\
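\textbf{Example.}\\ To illustrate the above notions in the simplest non-trivial situation, we may take $N=2$ and the Euclidean plane written in polar coordinates (all the computations below are elementary and serve only as a check of the definitions): $$ \overline{x}=u_1\cos u_2\,\overline{\epsilon}_1+u_1\sin u_2\,\overline{\epsilon}_2\textrm{, }\overline{e}_1=\cos u_2\,\overline{\epsilon}_1+\sin u_2\,\overline{\epsilon}_2\textrm{, }\overline{e}_2=-\sin u_2\,\overline{\epsilon}_1+\cos u_2\,\overline{\epsilon}_2. $$ Then $d\overline{x}=du_1\,\overline{e}_1+u_1du_2\,\overline{e}_2$, hence by (5) $$ \omega_1=du_1\textrm{, }\omega_2=u_1du_2, $$ and by (6) the Pfaff derivatives are $$ \nabla_1f=\partial_1f\textrm{, }\nabla_2f=\frac{1}{u_1}\partial_2f. $$ Moreover $d\overline{e}_1=du_2\,\overline{e}_2$, so $\omega_{12}=du_2=\frac{1}{u_1}\omega_2$ and therefore $$ q_{121}=0\textrm{, }q_{122}=\frac{1}{u_1}\textrm{, }q_{21m}=-q_{12m}. $$ One checks directly the first structure equation of Theorem 1: $d\omega_2=du_1\wedge du_2=\omega_1\wedge\omega_{12}$. We will return to this frame several times below.\\ \\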
In this way equations (21) give $$ d\omega_j=\sum^{N}_{m=1}\omega_m\wedge\omega_{mj}=\sum^{N}_{m,s=1}q_{mjs}\omega_{m}\wedge\omega_{s}. $$ Hence \begin{equation} d\omega_j=\sum_{m<s}Q_{mjs}\omega_m\wedge\omega_s, \end{equation} with \begin{equation} Q_{mjs}:=rot_{ms}(q_{mjs})=q_{mjs}-q_{sjm}. \end{equation} With the above notation it holds that $Q_{ikl}+Q_{lki}=0$. Also, if we set \begin{equation} R_{ijkl}:=-\sum^{N}_{m=1}rot_{kl}\left(q_{imk}q_{jml}\right), \end{equation} then we have \begin{equation} d\omega_{ij}=\sum_{k<l}R_{ijkl}\omega_k\wedge\omega_l. \end{equation} \\ Using the structure equations of the space $\bf{S}$ we have the next\\ \\ \textbf{Proposition 1.}\\ \begin{equation} \nabla_k\nabla_m(\overline{x})=\nabla_k(\overline{e}_m)=\sum^{N}_{j=1}q_{mjk}\overline{e}_j. \end{equation} \textbf{Proof.}\\ $$ d\overline{e}_i=\sum^{N}_{k=1}\nabla_k(\overline{e}_i)\omega_k\Leftrightarrow \sum^{N}_{k=1}\omega_{ik}\overline{e}_k=\sum^{N}_{k=1}\nabla_k(\overline{e}_i)\omega_k\Rightarrow $$ $$ \sum^{N}_{m=1}\left(\sum^{N}_{k=1}q_{ikm}\overline{e}_k\right)\omega_m=\sum^{N}_{m=1}\nabla_{m}(\overline{e}_i)\omega_m. 
$$ \\ \textbf{Theorem 2.}\\ For every function $f$ the following relations hold: \begin{equation} \nabla_l\nabla_mf-\nabla_m\nabla_lf+\sum^{N}_{k=1}(\nabla_kf)Q_{lkm}=0\textrm{, }\forall\mbox{ } l,m\in\{1,2,\ldots,N\} \end{equation} or equivalently \begin{equation} rot_{lm}\left(\nabla_l\nabla_mf+\sum^{N}_{k=1}(\nabla_kf)q_{lkm}\right)=0. \end{equation} \\ \textbf{Proof.} $$ df=\sum^{N}_{k=1}(\nabla_kf)\omega_k\Rightarrow d(df)=\sum^{N}_{k=1}d(\nabla_kf)\wedge\omega_k+\sum^{N}_{k=1}(\nabla_kf)d\omega_k=0 $$ or $$ \sum^{N}_{k=1}\left(\sum^{N}_{s=1}(\nabla_s\nabla_kf)\omega_s\right)\wedge\omega_k+\sum^{N}_{k=1}(\nabla_kf)\left(\sum_{l<m}rot_{lm}(q_{lkm})\omega_{l}\wedge\omega_{m}\right)=0 $$ or $$ \sum_{l<m}\left(\nabla_l\nabla_mf-\nabla_m\nabla_lf+\sum^{N}_{k=1}(\nabla_kf)Q_{lkm}\right)\omega_l\wedge\omega_m=0. $$ \\ \textbf{Corollary 1.}\\ If $\lambda_{ij}=-\lambda_{ji}$, $i,j\in\{1,2,\ldots,N\}$, is any antisymmetric field, then \begin{equation} \sum^{N}_{i,j=1}\lambda_{ij}\nabla_i\nabla_jf+\sum^{N}_{i,j,k=1}\lambda_{ij}q_{ikj}\nabla_kf=0. \end{equation} \\ \textbf{Theorem 3.} \begin{equation} R_{iklm}=\nabla_l(q_{ikm})-\nabla_{m}(q_{ikl})+\sum^{N}_{s=1}q_{iks}Q_{lsm}=-\sum^{N}_{s=1}rot_{lm}(q_{isl}q_{ksm}) \end{equation} and it also holds that \begin{equation} R_{iklm}=-R_{kilm}\textrm{, }R_{iklm}=-R_{ikml}. \end{equation} \\ \textbf{Proof.}\\ We sketch the proof. $$ d\omega_{ij}=\sum^{N}_{k=1}d(q_{ijk})\wedge\omega_k+\sum^{N}_{k=1}q_{ijk}d\omega_k= $$ $$ =\sum_{m<l}\left(\nabla_{m}(q_{ijl})-\nabla_{l}(q_{ijm})+\sum^{N}_{s=1}q_{ijs}Q_{msl}\right)\omega_m\wedge\omega_{l} $$ and $$ d\omega_{ij}=\sum^{N}_{k=1}\omega_{ik}\wedge\omega_{kj} =\sum_{m<l}rot_{ml}\left(\sum^{N}_{s=1}q_{ism}q_{sjl}\right)\omega_{m}\wedge\omega_{l}. $$ Comparing the above two relations we get the result.\\ \\ \textbf{Note 1.}\\ When there exists a field $\Phi_{ij}$ such that $\nabla_{k}\Phi_{ij}=q_{ijk}$, then from Theorem 2 we have $d\omega_{ij}=0$, and using Theorem 3: \begin{equation} R_{iklm}=-\sum_{s=1}^{N}rot_{lm}\left(q_{isl}q_{ksm}\right)=0 \end{equation} and, in view of (26) and (16): \begin{equation} \partial_l\Gamma_{ijk}-\partial_k\Gamma_{ijl}=0. \end{equation} \\ \textbf{Proposition 2.} \begin{equation} \nabla_l\nabla_m(\overline{e}_i)=\sum^{N}_{k=1}\left(\nabla_l\left(q_{ikm}\right)-\sum^{N}_{s=1}q_{ism}q_{ksl}\right)\overline{e}_k \end{equation} Hence \begin{equation} \left\langle \nabla_k^2(\overline{e}_m),\overline{e}_m \right\rangle=-\sum^{N}_{s=1}q_{msk}^2=\textrm{invariant}, \end{equation} \begin{equation} T_{ijlm}:=\left\langle \nabla_{l}\nabla_m(\overline{e}_i)-\nabla_m\nabla_{l}(\overline{e}_i),\overline{e}_{j}\right\rangle=\textrm{invariant}, \end{equation} and \begin{equation} T_{ijlm}=rot_{lm}\left(\nabla_l\left(q_{ijm}\right)+\sum^{N}_{s=1}q_{isl}q_{jsm}\right) \end{equation} for all $i,j,l,m\in\{1,2,\ldots,N\}$.\\ \\ \textbf{Proof.}\\ See Lemma 1 below.\\ \\ \textbf{Theorem 4.} \begin{equation} T_{ijlm}=-\sum^{N}_{s=1}q_{ijs}Q_{lsm} \end{equation} and \begin{equation} \nabla_l\left(q_{ijm}\right)-\nabla_m\left(q_{ijl}\right) \end{equation} are invariants.\\ \\ \textbf{Proof.}\\ Use Theorem 3 along with Proposition 2.\\ \\ \textbf{Definition 3.}\\ We construct the differential operator $\Theta^{(1)}_{lm}$, which acts on a vector field $\overline{Y}=Y_1\overline{e}_{1}+Y_2\overline{e}_2+\ldots+Y_N\overline{e}_N$ by \begin{equation} \Theta^{(1)}_{lm}(\overline{Y}):=\nabla_l(Y_m)-\nabla_m(Y_l)+\sum^{N}_{k=1}Y_kQ_{lkm}. 
\end{equation} \\ \textbf{Theorem 5.}\\ The derivative of a vector $\overline{Y}=Y_1\overline{e}_{1}+Y_2\overline{e}_2+\ldots+Y_N\overline{e}_N$ is \begin{equation} d\overline{Y}=\sum^{N}_{j=1}\left(\sum^{N}_{l=1}Y_{j;l}\omega_l\right)\overline{e}_j \end{equation} where \begin{equation} Y_{j;l}=\nabla_lY_j-\sum^{N}_{k=1}q_{jkl}Y_k=\textrm{invariant}. \end{equation} \\ \textbf{Remark 1.} \begin{equation} \Theta_{lm}^{(1)}\left(\overline{Y}\right)=Y_{m;l}-Y_{l;m}. \end{equation} \\ \textbf{Lemma 1.}\\ For every vector field $\overline{Y}$ we have \begin{equation} \left\langle \nabla_k \overline{Y},\overline{e}_l\right\rangle=Y_{l;k}=\textrm{invariant}. \end{equation} \\ \textbf{Proof.} $$ \left\langle \nabla_k \overline{Y},\overline{e}_l\right\rangle=\left\langle \nabla_k\left(\sum^{N}_{m=1}Y_m\overline{e}_m\right),\overline{e}_l\right\rangle= $$ $$ =\sum^{N}_{m=1}\left\langle\nabla_k\left(Y_m\right)\overline{e}_m+Y_m\nabla_k\left(\overline{e}_m\right),\overline{e}_l\right\rangle= $$ $$ =\nabla_kY_l+\sum^{N}_{m=1}Y_m\sum^{N}_{s=1}q_{msk}\left\langle\overline{e}_s,\overline{e}_l\right\rangle=\nabla_kY_l+\sum^{N}_{m=1}Y_mq_{mlk}= $$ $$ =\nabla_kY_l-\sum^{N}_{m=1}Y_mq_{lmk}=Y_{l;k}=\textrm{invariant}. $$ \\ \textbf{Proposition 3.}\\ The connections $q_{ijk}$ are invariants.\\ \\ \textbf{Proof.} $$ \left\langle \nabla_k\overline{e}_m,\overline{e}_l\right\rangle=\left\langle\sum^{N}_{j=1}q_{mjk}\overline{e}_j,\overline{e}_l\right\rangle=q_{mlk}. $$ \\ \textbf{Definition 4.}\\ If $\omega=\sum^{N}_{k=1}a_k\omega_k$ is a Pfaff form, then we define \begin{equation} \Theta^{(2)}_{lm}(\omega):=\nabla_{l}a_m-\nabla_{m}a_l+\sum^{N}_{k=1}a_kQ_{lkm}. \end{equation} Hence \begin{equation} d\omega=\sum_{l<m}\Theta^{(2)}_{lm}\left(\omega\right)\omega_{l}\wedge\omega_{m}. \end{equation}
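\textbf{Example.}\\ As a quick check of Definition 4, return to the polar coordinate frame above and take $\omega=\omega_2$, so that $a_1=0$ and $a_2=1$. Then $$ \Theta^{(2)}_{12}(\omega_2)=\nabla_1(1)-\nabla_2(0)+Q_{122}=q_{122}-q_{222}=\frac{1}{u_1}, $$ and indeed $d\omega_2=du_1\wedge du_2=\frac{1}{u_1}\,\omega_1\wedge\omega_2$, in agreement with (47).\\ \\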
The above derivative operators $Y_{m;l}$, $\Theta^{(1)}_{lm}\left(\overline{Y}\right)$ and $\Theta^{(2)}_{lm}(\omega)$ are invariant under all acceptable changes of variables. As an application of such differentiation we have the forms $\omega_{ij}=\sum^{N}_{k=1}q_{ijk}\omega_k$, for which we can write \begin{equation} \Theta^{(2)}_{lm}\left(\omega_{ij}\right)=R_{ijlm}, \end{equation} leading us to conclude that the $R_{ijlm}$ are invariants of $\textbf{S}$. Actually:\\ \\ \textbf{Theorem 6.}\\ $R_{ijlm}$ is the curvature tensor of $\textbf{S}$.\\ \\ \textbf{Theorem 7.}\\ If $\omega=\sum^{N}_{k=1}a_{k}\omega_k$, then \begin{equation} d(f\omega)=\sum_{l<m}\left(\left| \begin{array}{cc} \nabla_lf & \nabla_{m}f\\ a_l & a_m \end{array} \right|+\Theta^{(2)}_{lm}(\omega)f\right)\omega_{l}\wedge\omega_{m}. \end{equation} In particular \begin{equation} d(f\omega_{ij})=\sum_{l<m}\left(\left| \begin{array}{cc} \nabla_lf & \nabla_{m}f\\ q_{ijl} & q_{ijm} \end{array} \right|+R_{ijlm}f\right)\omega_{l}\wedge\omega_{m}. \end{equation} Also \begin{equation} \Theta^{(2)}_{lm}(f\omega)=\left| \begin{array}{cc} \nabla_lf & \nabla_mf\\ a_l & a_m \end{array} \right|+f\Theta^{(2)}_{lm}(\omega). \end{equation} If we set \begin{equation} R:=R_1\omega_1+R_2\omega_2+\ldots+R_N\omega_N, \end{equation} then \begin{equation} \Theta^{(2)}_{lm}\left(R\right)=\nabla_lR_m-\nabla_mR_l+\sum^{N}_{k=1}R_kQ_{lkm} \end{equation} and \begin{equation} \sum_{l<m}\Theta^{(2)}_{lm}(\omega)=\sum_{l<m}\left(\nabla_la_m-\nabla_ma_l\right)+\sum^{N}_{k=1}a_kA_k, \end{equation} where the $A_k$ are defined in Definition 6 below.\\ \\ \textbf{Theorem 8.}\\ We set \begin{equation} \overline{R}:=R_1\overline{e}_1+R_2\overline{e}_2+\ldots+R_N\overline{e}_N, \end{equation} where the $R_k$ are as in Definition 6 below; then \begin{equation} \sum_{l<m}\left(\nabla_lR_m-\nabla_mR_l\right)=\textrm{invariant}. \end{equation} \\ \textbf{Proof.}\\ From Definition 6 below and $\sum^{N}_{k=1}R_kA_k=0$ (relation (91) below), we get \begin{equation} \sum_{l<m}\Theta^{(1)}_{lm}\left(\overline{R}\right)=\sum_{l<m}\left(\nabla_lR_m-\nabla_mR_l+\sum^{N}_{k=1}R_kQ_{lkm}\right)=\sum_{l<m}\left(\nabla_lR_m-\nabla_mR_l\right), \end{equation} since $\sum_{l<m}\sum^{N}_{k=1}R_kQ_{lkm}=\sum^{N}_{k=1}R_kA_k=0$; the left-hand side is invariant, hence so is the right-hand side.\\ \\ \textbf{Note 2.}\\ \textbf{i)} \begin{equation} \Theta^{(2)}_{lm}(\omega_j)=Q_{ljm}=\textrm{invariant}. \end{equation} Hence \begin{equation} \sum_{l<m}\Theta^{(2)}_{lm}(\omega_j)=A_j=\textrm{invariant}. \end{equation} \textbf{ii)} If we assume that $\omega=dg=\sum^{N}_{k=1}(\nabla_{k}g)\omega_k$ and use Theorem 2, we get \begin{equation} \Theta^{(2)}_{lm}(dg)=0 \end{equation} and hence for all multivariable functions $f,g$ we have the next\\ \\ \textbf{Theorem 9.} \begin{equation} \int_{\partial A}fdg=\sum_{l<m}\int\int_A\left(\nabla_lf\nabla_mg-\nabla_mf\nabla_lg\right)\omega_l\wedge\omega_m. \end{equation} \\ \textbf{Proposition 4.}\\ If there exists a multivariable function $f=f(u_1,u_2,\ldots,u_N)\in\textbf{R}$ such that \begin{equation} \left| \begin{array}{cc} \nabla_1f & \nabla_2f\\ q_{ij1} & q_{ij2} \end{array} \right|=\left| \begin{array}{cc} \nabla_2f & \nabla_3f\\ q_{ij2} & q_{ij3} \end{array} \right|=\ldots=\left| \begin{array}{cc} \nabla_{N-1}f & \nabla_Nf\\ q_{ijN-1} & q_{ijN} \end{array} \right|=0, \end{equation} then there exists a function $\mu_{ij}$ such that \begin{equation} \int_{\partial A} \mu_{ij}df=\sum_{l<m} \int\int_AR_{ijlm}\omega_{l}\wedge\omega_{m}. 
\end{equation} \\ \textbf{Proof.}\\ Obviously we can write $$ \frac{\nabla_1f}{q_{ij1}}=\frac{\nabla_2f}{q_{ij2}}=\ldots=\frac{\nabla_Nf}{q_{ijN}}=\frac{1}{\mu_{ij}}. $$ From Theorem 3 we have $$ \nabla_l(q_{ijm})-\nabla_m(q_{ijl})+\sum^{N}_{k=1}q_{ijk}Q_{lkm}=R_{ijlm}. $$ Hence $$ \nabla_{l}(\mu_{ij}\nabla_mf)-\nabla_{m}(\mu_{ij}\nabla_lf)+\sum^{N}_{k=1}(\nabla_kf)\mu_{ij}Q_{lkm}=R_{ijlm}, $$ or equivalently, using Theorem 2, $$ \nabla_mf\cdot\nabla_l \mu_{ij}-\nabla_lf\cdot\nabla_{m}\mu_{ij}=R_{ijlm}. $$ Hence from relation (61) (the Stokes formula) we get the result.\\ \\ \textbf{Note 3.}\\ Condition (62) is equivalent to saying that there exist functions $\mu_{ij}$ and $f$ such that \begin{equation} \omega_{ij}=\mu_{ij}df. \end{equation} \\ \textbf{Theorem 10.}\\ If there exists a function field $F_{ij}$ such that \begin{equation} \sum_{l<m}\sum_{i,j\in I}F_{ij}\Theta^{(1)}_{lm}\left(\overline{x}\omega_{ij}\right)=\overline{0}, \end{equation} then \begin{equation} \overline{x}=\sum^{N}_{k=1}\left(\frac{\sum_{i,j\in I}F_{ij}r_{ijk}}{\sum_{l<m}\sum_{i,j\in I}F_{ij}R_{ijlm}}\right)\overline{e}_k. \end{equation} \\ \textbf{Proof.}\\ From (50) and (47) we have $$ \sum_{l<m}\Theta_{lm}\left(\overline{x}\omega_{ij}\right)=\sum_{l<m}\left| \begin{array}{cc} \nabla_l(\overline{x}) & \nabla_m(\overline{x})\\ q_{ijl} & q_{ijm} \end{array} \right|+\overline{x}\sum_{l<m}R_{ijlm}= $$ $$ =\sum_{l<m}\left| \begin{array}{cc} \overline{e}_l & \overline{e}_m\\ q_{ijl} & q_{ijm} \end{array} \right|+\overline{x}\sum_{l<m}R_{ijlm}=-\sum^{N}_{k,s=1}\epsilon_{ks}q_{ijs}\overline{e}_k+\overline{x}\sum_{l<m}R_{ijlm}= $$ \begin{equation} =-\sum^{N}_{k=1}r_{ijk}\overline{e}_k+\overline{x}\sum_{l<m}R_{ijlm}, \end{equation} where the $r_{ijk}$ are defined in (68) and the $\epsilon_{ks}$ are those of Definition 6 below. Hence, if there exists a function field $F_{ij}$ such that (65) holds, then (66) follows.\\ \\ Above we have set \begin{equation} r_{ijk}=\sum^{N}_{s=1}\epsilon_{ks}q_{ijs}. \end{equation} Also \begin{equation} \left\langle\sum_{l<m}\Theta_{lm}\left(\overline{x}\omega_{ij}\right),\overline{e}_k\right\rangle=-r_{ijk}+w_k\sum_{l<m}R_{ijlm}, \end{equation} where $w_k=\left\langle \overline{x},\overline{e}_k\right\rangle$ is called the support function of the hypersurface.\\ \\ Let the functions $$ f_k=f_{k}(x_1,x_2,\ldots,x_N)\textrm{, }k=1,2,\ldots,N $$ be such that \begin{equation} \overline{x}=\sum^{N}_{k=1}f_k\overline{e}_{k}; \end{equation} then $$ d(\overline{x})=\sum^{N}_{k=1}df_k\overline{e}_k+\sum^{N}_{k=1}f_kd(\overline{e}_k)=\sum^{N}_{k=1}(df_k)\overline{e}_k+\sum^{N}_{k=1}f_k\sum^{N}_{j=1}\omega_{kj}\overline{e}_j= $$ $$ =\sum^{N}_{k=1}(df_k)\overline{e}_k+\sum^{N}_{j=1}f_j\sum^{N}_{k=1}\omega_{jk}\overline{e}_k=\sum^{N}_{k=1}\left(df_k+\sum^{N}_{j=1}f_j\omega_{jk}\right)\overline{e}_k. $$ Hence from (5) and the above relation we get \begin{equation} \omega_k=df_k+\sum^{N}_{j=1}f_j\omega_{jk}. 
\end{equation} If we use the Pfaff expansion of the differential, we get $$ \omega_k=\sum^{N}_{l=1}(\nabla_lf_k)\omega_l+\sum^{N}_{j=1}f_j\sum^{N}_{m=1}q_{jkm}\omega_m, $$ or equivalently $$ \omega_k=\sum^{N}_{l=1}(\nabla_lf_k)\omega_l+\sum^{N}_{l=1}\sum^{N}_{j=1}f_jq_{jkl}\omega_l. $$ Hence $$ \nabla_lf_k-\sum^{N}_{j=1}f_jq_{kjl}=\delta_{lk}, $$ or equivalently we conclude that: the necessary conditions such that the $f_k$ are the coordinates of the vector $\overline{x}$ (which generates the space $\textbf{S}$) in the moving frame $\overline{e}_{k}$ are $$ f_{k;l}=\delta_{kl}. $$ Hence we get the next\\ \\ \textbf{Theorem 11.}\\ If \begin{equation} f_{k;l}=\delta_{kl}\textrm{, where }k,l\in\{1,2,\ldots,N\}, \end{equation} then we have $$ \overline{x}=\sum^{N}_{k=1}f_k\overline{e}_k+\overline{h}\textrm{, where }d\overline{h}=0, $$ and conversely.\\ \\
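\textbf{Example.}\\ In the polar coordinate frame above we have $\overline{x}=u_1\overline{e}_1$, i.e. $f_1=u_1$ and $f_2=0$, which provides a direct check of Theorem 11: $$ f_{1;1}=\nabla_1u_1-u_1q_{111}=1\textrm{, }f_{1;2}=\nabla_2u_1-u_1q_{112}=0\textrm{, } $$ $$ f_{2;1}=-u_1q_{211}=0\textrm{, }f_{2;2}=-u_1q_{212}=-u_1\left(-\frac{1}{u_1}\right)=1, $$ so indeed $f_{k;l}=\delta_{kl}$, here with $\overline{h}=\overline{0}$.\\ \\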
Finally, having (66) in mind, we get \begin{equation} f_k=\frac{\sum_{i,j\in I}F_{ij}r_{ijk}}{\sum_{l<m}\sum_{i,j\in I}F_{ij}R_{ijlm}}. \end{equation} For two generalized hyper-vectors $F=F_{ij\ldots k}$ and $G=G_{ij\ldots k}$ we define the generalized inner product \begin{equation} (F,G):=\sum^{N}_{i,j,\ldots,k=1}F_{ij\ldots k}G_{ij\ldots k}. \end{equation} Hence relation (73) can be written as $$ f_k=\sum^{N}_{i,j=1}\frac{F_{ij}}{(F,U)}r_{ijk}, $$ where $$ U:=U_{ij}:=\sum_{l<m}R_{ijlm}. $$ Hence $$ f_{k;l}=\sum^{N}_{i,j=1}\nabla_l\left(\frac{F_{ij}}{(F,U)}r_{ijk}\right)-\sum^{N}_{i,j,m=1}\frac{F_{ij}}{(F,U)}r_{ijm}q_{kml}= $$ $$ =\sum^{N}_{i,j=1}\nabla_l\left(\frac{F_{ij}}{(F,U)}\right)r_{ijk}+\sum^{N}_{i,j=1}\frac{F_{ij}}{(F,U)}\nabla_l\left(r_{ijk}\right)-\sum^{N}_{i,j,m=1}\frac{F_{ij}}{(F,U)}r_{ijm}q_{kml}= $$ $$ =\sum^{N}_{i,j=1}\nabla_l\left(\frac{F_{ij}}{(F,U)}\right)r_{ijk}+\sum^{N}_{i,j=1}\frac{F_{ij}}{(F,U)}r_{ij\{k\};l}. $$ \\ \textbf{Theorem 12.}\\ If $F_{ij}$ is such that \begin{equation} \sum^{N}_{i,j=1}\nabla_l\left(\frac{F_{ij}}{(F,U)}\right)r_{ijk}+\sum^{N}_{i,j=1}\frac{F_{ij}}{(F,U)}r_{ij\{k\};l}=\delta_{kl}, \end{equation} then there exists a constant vector $\overline{h}$ such that \begin{equation} \overline{x}=\sum^{N}_{k,i,j=1}\frac{F_{ij}}{(F,U)}r_{ijk}\overline{e}_k+\overline{h}, \end{equation} where $d\overline{h}=0$.\\ \\ \textbf{Theorem 13.}\\ There holds (see Note 4 below): \begin{equation} q_{ik\{m\};l}-q_{ik\{l\};m}=R_{iklm}. 
\end{equation} and $$ b_{\{k\}m,l}=b_{\{k\}l,m},\eqno{(77.1)} $$ where $$ Y_{\{n\}m,l}=\partial_lY_{nm}-\sum^{N}_{j=1}\Gamma_{njl}Y_{jm}\textrm{, }Y_{n\{m\},l}=\partial_lY_{nm}-\sum^{N}_{j=1}\Gamma_{mjl}Y_{nj}\eqno{(77.2)} $$ and $$ Y_{nm,l}=\partial_lY_{nm}-\sum^{N}_{j=1}\Gamma_{njl}Y_{jm}-\sum^{N}_{j=1}\Gamma_{mjl}Y_{nj}\textrm{, }\ldots\textrm{ etc.}\eqno{(77.3)} $$ \\ \textbf{Proof.}\\ Easy.\\ \\ \textbf{Note 4.}\\ In general, in $t_{ij\ldots \{k\}\ldots m;l}$ the brackets indicate where the differentiation acts. Hence \begin{equation} t_{ij\ldots \{k\}\ldots m;l}=\nabla_lt_{ij\ldots k\ldots m}-\sum^{N}_{\nu=1}q_{k\nu l}t_{ij\ldots \nu\ldots m}. \end{equation} We can also use more brackets $\{\}$ in the vector: $$ t_{ij\ldots \{k_1\}\ldots\{k_2\}\ldots m;l}= $$ \begin{equation} =\nabla_lt_{ij\ldots k_1\ldots k_2\ldots m}-\sum^{N}_{\nu_1=1}q_{k_1\nu_1 l}t_{ij\ldots \nu_1\ldots k_2\ldots m}-\sum^{N}_{\nu_2=1}q_{k_2\nu_2 l}t_{ij\ldots k_1\ldots\nu_2\ldots m}. \end{equation} If we replace the ``$;$'' by ``$,$'', we are led to the classical invariant derivative $$ t_{ij\ldots \{k\}\ldots m,l}=\partial_lt_{ij\ldots k\ldots m}-\sum^{N}_{\nu=1}\Gamma_{k\nu l}t_{ij\ldots \nu\ldots m}, $$ etc. The two kinds of derivative lead to the same evaluations. More precisely, it holds that $$ \sum^{N}_{l=1}t_{ij\ldots \{k\}\ldots m;l}\omega_l=\sum^{N}_{l=1}t_{ij\ldots \{k\}\ldots m,l}du_l. $$ \\ \textbf{Theorem 14.}\\ The following forms are invariants of the space $\textbf{S}$:\\ i) The linear element \begin{equation} I=(ds)^2=\sum^{N}_{k=1}\omega_k^2. \end{equation} ii) The volume element of $\textbf{S}$, \begin{equation} V=\omega_1\wedge\omega_2\wedge\ldots\wedge\omega_{N-1}\wedge \omega_{N}. \end{equation} The area element of the subspace normal to $\overline{e}_{M}$ is $$ E_{M}=\omega_1\wedge\omega_2\wedge\ldots\wedge\omega_{M-1}\wedge\omega_{M+1}\wedge\ldots\wedge\omega_{N}. $$ iii) The second invariant forms \begin{equation} II_{M}=\left\langle d\overline{x},d\overline{e}_{M}\right\rangle=\sum^{N}_{k=1}\omega_k\omega_{Mk}\textrm{, }M=1,\ldots,N. \end{equation} iv) The linear element of $\overline{e}_{M}$, \begin{equation} III_{M}=(d\overline{e}_M)^2=\left\langle d\overline{e}_{M},d\overline{e}_{M}\right\rangle=\sum^{N}_{k=1}\omega_{Mk}^2. \end{equation} v) The Gauss curvature $K_{M}$, which corresponds to the subspace normal to the vector $\overline{e}_M$, given by \begin{equation} K_{M}=det\left(\kappa^{\{M\}}_{ij}\right)=\frac{\omega_{M1}\wedge\omega_{M2}\wedge\ldots\wedge\omega_{M(M-1)}\wedge\omega_{M(M+1)}\wedge\ldots\wedge \omega_{MN}}{E_{M}}, \end{equation} where \begin{equation} \kappa^{\{M\}}_{km}:=q_{kMm}. \end{equation} The above forms remain unchanged under every change of position of $\overline{x}$ and rotation of $\{\overline{e}_j\}_{j=1,2,\ldots,N}$, except for a possible change of sign.\\ \\ \textbf{Definition 5.}\\ We define the Beltrami differential operator \begin{equation} \Delta_2f:=\sum_{l<m}\left( \nabla_{l}^2f+\nabla^2_{m}f+\frac{1}{N-1}\sum_{i<j} \left|\begin{array}{cc} \nabla_if & \nabla_jf\\ Q_{lim} & Q_{ljm} \end{array}\right|\right). 
\end{equation} We also call $f$ harmonic if $\Delta_2f=0$.\\ \\ \textbf{Remark 2.}\\ In the particular case $N=2$, where the space $\textbf{S}$ is a two-dimensional surface embedded in $\textbf{E}_3$, we have \begin{equation} \Delta_2A=\nabla_1\nabla_1A+\nabla_2\nabla_2A+q_2\nabla_1A-q_1\nabla_2A, \end{equation} where \begin{equation} d\overline{x}=\omega_1\overline{e}_1+\omega_2\overline{e}_2 \end{equation} and $d\omega_1=q_1\omega_1\wedge\omega_2$, $d\omega_2=q_2\omega_1\wedge\omega_2$. In this case there exist functions $f,f^{*}$ such that $\nabla_1f=-\nabla_2f^{*}$, $\nabla_2f=\nabla_1f^{*}$ and $\Delta_2f=\Delta_2f^{*}=0$ ($f,f^{*}$ harmonic). In higher dimensions it is not so easy to construct harmonic functions. However, we will give here one way of constructing them. Before studying this operator, we simplify the expansion (86) (the definition of the Beltrami differential operator $\Delta_2$). We also generalize it (in a way), as we show below in Definition 8. First we give a definition.\\ \\ \textbf{Definition 6.}\\ Set \begin{equation} A_s:=\sum_{l<m}Q_{lsm}\textrm{, }R_k:=\sum^{N}_{s=1}\epsilon_{ks}A_s \end{equation} and more generally \begin{equation} a_s=\sum_{l<m}t_{lsm}\textrm{ and } r_{s}:=\sum^{N}_{k=1}\epsilon_{sk}a_k, \end{equation} where $\epsilon_{ks}:=-1$ if $k<s$, $1$ if $k>s$ and $0$ if $k=s$. Then from the identity $\sum^{N}_{k,s=1}\epsilon_{ks}f_kf_s=0$, valid for every choice of $f_k$, we get \begin{equation} \sum^{N}_{k=1}R_kA_k=0\textrm{, }\sum^{N}_{s=1}r_sa_s=0. \end{equation}
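\textbf{Example.}\\ For the polar coordinate frame above, Definition 6 gives $A_1=Q_{112}=0$ and $A_2=Q_{122}=\frac{1}{u_1}$, hence $R_1=\epsilon_{12}A_2=-\frac{1}{u_1}$ and $R_2=\epsilon_{21}A_1=0$; clearly $\sum_kR_kA_k=0$. Note also that for $N=2$ the quantities of Remark 2 are $q_1=A_1$ and $q_2=A_2$, so the formula of Remark 2 gives in this frame $$ \Delta_2f=\partial_1^2f+\frac{1}{u_1^2}\partial_2^2f+\frac{1}{u_1}\partial_1f, $$ which is the classical Laplacian of the plane written in polar coordinates.\\ \\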
From the above definition we have the next\\ \\ \textbf{Theorem 15.} \begin{equation} \Delta_2f=\sum^{N}_{k=1}\left((N-1)\nabla_k^2f-\frac{1}{N-1}\left (\nabla_kf\right)R_k\right). \end{equation} Also \begin{equation} \Delta_2f=(N-1)\sum^{N}_{k=1}\nabla_k^2f-\frac{1}{N-1}\left\langle\overline{\textrm{grad}(f)},\overline{R}\right\rangle. \end{equation} \\ \textbf{Proof.} $$ \Delta_2f=\sum_{l<m}\left(\nabla_l^2f+\nabla_m^2f-\frac{1}{N-1}\sum^{N}_{k,s=1}\left(\nabla_kf\right)\epsilon_{ks}Q_{lsm}\right)= $$ $$ =(N-1)\sum^{N}_{k=1}\nabla_k^2f-\frac{1}{N-1}\sum^{N}_{k=1}\left(\nabla_kf\right)\sum^{N}_{s=1}\epsilon_{ks}A_s= $$ $$ =(N-1)\sum^{N}_{k=1}\nabla_k^2f-\frac{1}{N-1}\sum^{N}_{k=1}\left(\nabla_kf\right)R_k. $$ \\ \textbf{Note 5.}\\ 1) If there exists $f$ such that $\sum^{N}_{k=1}A_k\omega_k=df$, then $\nabla_kf=A_k$ and $$ \Delta_2f=(N-1)\sum_{k=1}^N\nabla_kA_k. $$ 2) If $h_k$ is any vector such that $\sum^{N}_{k=1}h_k=0$ and $f_k=\nabla_kF$ is such that \begin{equation} (N-1)\nabla_kf_k-\frac{1}{N-1}f_kR_k=h_k\textrm{, }\forall k=1,2,\ldots,N, \end{equation} then $$ \Delta_2F=0. $$ Hence, solving first the PDEs $$ (N-1)\nabla_k\Psi-\frac{1}{N-1}\Psi R_k=h_k\textrm{, }\forall k=1,2,\ldots,N,\eqno{(eq1)} $$ $$ \sum^{N}_{k=1}h_k=0,\eqno{(eq2)} $$ we find $N$ solutions $\Psi=f_k$. Then we solve $f_k=\nabla_kF$ and thus we get a solution $F$ of $\Delta_2F=0$.\\ 3) This leads us to define the derivative $D_k$, $k=1,2,\ldots,N$, which acts on every function $f$ as \begin{equation} D_k\left(f\right):=D_kf:=(N-1)\nabla_kf-\frac{1}{N-1}R_k f. \end{equation} Obviously $D_k$ is linear. But $$ d(fg)=gdf+fdg=g\sum^{N}_{k=1}\nabla_kf\omega_k+f\sum^{N}_{k=1}\nabla_kg\omega_k=\sum^{N}_{k=1}\left(g\nabla_kf+f\nabla_kg\right)\omega_k, $$ or equivalently \begin{equation} \nabla_k(fg)=g\nabla_kf+f\nabla_kg. \end{equation} Hence the differential operator $D_k$ acts on the product of $f$ and $g$ as $$ D_k(fg)=(N-1)\nabla_k(fg)-\frac{1}{N-1}fgR_k= $$ $$ =(N-1)f\nabla_kg+(N-1)g\nabla_kf-\frac{1}{N-1}fgR_k=fD_kg+(N-1)g\nabla_kf= $$ $$ =gD_kf+(N-1)f\nabla_kg, $$ and finally we have \begin{equation} D_k(fg)=fD_kg+gD_kf+\frac{1}{N-1}f gR_k, \end{equation} \begin{equation} fD_kg-gD_kf=(N-1)(f\nabla_kg-g\nabla_kf). \end{equation} The commutator of $D_k$ and $\nabla_k$ acting on a scalar field is $$ \left[D_k,\nabla_k\right]f =D_k\nabla_kf-\nabla_kD_kf= $$ $$ =(N-1)\nabla_k^2f-\frac{1}{N-1}R_k\nabla_kf-\left((N-1)\nabla_k^2f-\frac{1}{N-1}\nabla_k(R_kf)\right). $$ Hence, finally, after simplifications, \begin{equation} \left[D_k,\nabla_k\right]f =\frac{\nabla_kR_k}{N-1}f. \end{equation} i) If $f=const$, then $$ D_kf=-\frac{1}{N-1}R_kf. $$ ii) The derivative $D_k$ is such that if there exists a function $f$ with $$ D_kf=0\textrm{, }\forall k=1,2,\ldots,N, $$ then equivalently $$ \nabla_k(\log f)=\frac{1}{(N-1)^2}R_k\textrm{, }\forall k=1,2,\ldots,N. $$ But then \begin{equation} D_kf=0\Leftrightarrow \exists g\left(=(N-1)^2\log f\right):R_k=\nabla_kg\Leftrightarrow R=dg. \end{equation} By these arguments we conclude that only in specific spaces $\textbf{S}$ does there exist $f$ with $D_kf=0$ for all $k=1,2,\ldots,N$. Hence we have a definition-theorem:\\ \\ \textbf{Definition 7.}\\ A space is called $\textbf{S}$-$R$ iff there exists a function $g$ such that $R=dg$.\\ \\ \textbf{Theorem 16.}\\ A space $\textbf{S}$ is $\textbf{S}$-$R$ iff there exists $g$ such that $D_kg=0$, for all $k=1,2,\ldots,N$.\\ \\ \textbf{Corollary 2.}\\ In an $\textbf{S}$-$R$ space \begin{equation} \Theta_{lm}(R)=0\textrm{, }\forall l,m\in\{1,2,\ldots,N\}. 
\end{equation} \textbf{Proof.}\\ It is $R=\sum^{N}_{k=1}R_k\omega_k$ and $\textbf{S}$ is $\textbf{S}$-$R$. Hence $$ \Theta_{lm}(R)=\nabla_lR_m-\nabla_mR_l+\sum^{N}_{k=1}R_kQ_{lkm}. $$ But there exists $g$ such that $R_k=\nabla_kg$, for all $k=1,2,\ldots,N$. Hence $$ \Theta_{lm}(R)=\nabla_l\nabla_mg-\nabla_m\nabla_lg+\sum^{N}_{k=1}(\nabla_kg)Q_{lkm}=0. $$ The last equality is due to Theorem 2. Hence the result follows.\\ \\ \textbf{Corollary 3.}\\ In an $\textbf{S}$-$R$ space it holds that $$ d(fR)=\sum_{l<m}\left(R_m\nabla_lf-R_l\nabla_mf \right)\omega_l\wedge\omega_m.\eqno{(101.1)} $$ \textbf{Proof.}\\ The result is an application of Theorem 7.\\ \\ \textbf{Corollary 4.}\\ In every space $\textbf{S}$ it holds that $$ \sum^{N}_{k=1}A_k(D_kf)=(N-1)\sum^{N}_{k=1}A_k(\nabla_kf).\eqno{(101.2)} $$ \\ \textbf{Theorem 17.}\\ Set $$ Df:=(D_1f)\omega_1+(D_2f)\omega_2+\ldots+(D_Nf)\omega_N;\eqno{(101.3)} $$ then $$ Df=(N-1)df-\frac{1}{N-1}fR.\eqno{(101.4)} $$ Also $$ D\left\langle\overline{V}_1,\overline{V}_2\right\rangle=\left\langle D\overline{V}_1,\overline{V}_2\right\rangle+\left\langle \overline{V}_1,D\overline{V}_2\right\rangle+\frac{R}{N-1}\left\langle \overline{V}_1,\overline{V}_2\right\rangle.\eqno{(101.5)} $$ \\ \textbf{Theorem 18.}\\ If a space $\textbf{S}$ is $\textbf{S}$-$R$, then the PDE $\Delta_2\Psi=\mu \Psi$ has a non-trivial solution.\\ \\ \textbf{Proof.}\\ If $\textbf{S}$ is $\textbf{S}$-$R$, then from (100) we get that there exist $f,g$ such that $g=(N-1)^2\log f$ and $R=dg$, $D_jf=0$, $\forall j$. Hence $$ (N-1)\nabla_jf-\frac{1}{N-1}R_jf=0\Rightarrow $$ $$ (N-1)\nabla_j\nabla_jf-\frac{1}{N-1}R_j\nabla_jf-\frac{\nabla_jR_j}{N-1}f=0\Rightarrow $$ $$ \Delta_2f=\frac{f}{N-1}\sum^{N}_{j=1}\nabla_jR_j. $$ \\ \textbf{Theorem 19.}\\ In every space $\textbf{S}$ we have \begin{equation} \overline{Df}:=\sum^{N}_{k=1}(D_kf)\overline{e}_k=(N-1)\overline{\textrm{grad}f}-\frac{1}{N-1}\overline{R}f. \end{equation} \\ \textbf{Note 6.}\\ 1) If $C$ is a curve, then $$ \left(\frac{Df}{ds}\right)_C=(N-1)\left(\frac{df}{ds}\right)_C-\frac{1}{N-1}\left(\frac{dR}{ds}\right)_C f $$ and if $(dR)_C\neq0$, then $$ \left(\frac{Df}{dR}\right)_C=(N-1)\left(\frac{df}{dR}\right)_C-\frac{1}{N-1}f, $$ and $\left(\frac{Df}{dR}\right)_C=0$ iff $$ \left(\frac{df}{dR}\right)_C=\frac{1}{(N-1)^2}f. $$ Hence $$ f_C=const\cdot\exp\left(\frac{R_C}{(N-1)^2}\right). $$ Hence if $f=f(x_1,x_2,\ldots,x_N)$ and $C:x_i=x_i(s)$, $s\in\left[a,b\right]$, is such that $\left(\frac{Df}{ds}\right)_C=0$, then $$ f\left(x_1(s),x_2(s),\ldots,x_N(s)\right)=const\cdot\exp\left(\frac{R\left(x_1(s),x_2(s),\ldots,x_N(s)\right)}{(N-1)^2}\right). $$ 2) Also, assuming $C:x_i=x_i(s)$ is a curve in $\textbf{S}$ and $\overline{\nu}$ is the tangent vector of $C$, which is such that $\left\langle \overline{\nu},\overline{\nu}\right \rangle=1\Rightarrow\left\langle \overline{\nu},\frac{d\overline{\nu}}{ds}\right \rangle=0$, then $$ \frac{D\overline{\nu}}{ds}=(N-1)\frac{d\overline{\nu}}{ds}-\frac{1}{N-1}\frac{dR}{ds}\overline{\nu}. $$ Consequently we get $$ \left\langle\frac{D\overline{\nu}}{ds},\frac{d\overline{\nu}}{ds}\right\rangle=(N-1)k^2(s) $$ and $$ \left\langle\frac{D\overline{\nu}}{ds},\overline{\nu}\right\rangle=-\frac{1}{N-1}\frac{dR}{ds}. $$ But $$ D\left\langle \overline{\nu},\overline{\nu}\right \rangle=D(1)\Leftrightarrow 2\left\langle D\overline{\nu},\overline{\nu}\right \rangle+\frac{dR}{N-1}=-\frac{dR}{N-1} . $$ Hence \begin{equation} \left\langle D\overline{\nu},\overline{\nu}\right \rangle=-\frac{dR}{N-1}. 
\end{equation} Hence, differentiating, we get $$ d\left\langle D\overline{\nu},\overline{\nu}\right \rangle=0. $$ \\ \textbf{Theorem 20.}\\ For every curve $C:x_i=x_i(s)$ of a general space $\textbf{S}$, with $s$ being its normal parameter, the unitary tangent vector $\overline{\nu}$ of $C$ has the property $$ \left\langle \left(\frac{D\overline{\nu}}{ds}\right)_C,\overline{\nu}\right\rangle=-\frac{1}{N-1}\left(\frac{dR}{ds}\right)_C. $$ Moreover, if we define $$ k^{*}=\left(k^{*}\right)_C:=\left(\frac{1}{\rho_{R}}\right)_C:=\left|\left(\frac{D \overline{\nu}}{ds}\right)_C\right|, $$ then $$ k^{*}(s)=\sqrt{(N-1)^2k^2(s)+\frac{1}{(N-1)^2}\left(\left(\frac{dR}{ds}\right)_C\right)^2}. $$ If the space is $\textbf{S}$-$R$, then $$ \left(k^{*}\right)_C=(N-1)k(s). $$ \\ \textbf{Corollary 5.}\\ Let $C:x_i=x_i(s)$ be a curve of an $\textbf{S}$-$R$ space. If $s$ is the normal parameter of $C$ and $\overline{\nu}$ is the unitary tangent vector of $C$, then $$ \frac{D\overline{\nu}}{ds}=(N-1)\frac{d\overline{\nu}}{ds}\textrm{ and }\left|\frac{D\overline{\nu}}{ds}\right|=(N-1)k(s), $$ where $k(s)$ is the curvature of $C$.\\ \\
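\textbf{Example.}\\ The polar coordinate frame above gives an $\textbf{S}$-$R$ space: there $$ R=R_1\omega_1+R_2\omega_2=-\frac{1}{u_1}du_1=d\left(-\log u_1\right), $$ so $R=dg$ with $g=-\log u_1$. Accordingly, by (100) with $N=2$, the function $f=e^{g/(N-1)^2}=\frac{1}{u_1}$ satisfies $D_kf=0$ for $k=1,2$; for instance $$ D_1f=\nabla_1\frac{1}{u_1}-R_1\frac{1}{u_1}=-\frac{1}{u_1^2}+\frac{1}{u_1^2}=0. $$ \\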
Next we generalize the definition of the $\Delta_2$ operator.\\ \\ \textbf{Definition 8.}\\ \begin{equation} \Delta^{(\lambda)}(f)=\sum^{N}_{i,j=1}\lambda_{ij}D_i\nabla_jf. \end{equation} Hence if $\lambda_{ij}=\delta_{ij}$, then \begin{equation} \Delta_2f=\sum^{N}_{i,j=1}\delta_{ij}D_i\nabla_jf=\sum^{N}_{k=1}D_k\nabla_kf. \end{equation} \\ \textbf{Theorem 21.}\\ If $\lambda_{ij}=\epsilon_{ij}$, then \begin{equation} \Delta^{(\epsilon)}f=\sum^{N}_{k=1}\left((N-1)A_k+\frac{1}{N-1}\sum^{N}_{s=1}\epsilon_{ks}R_s\right)\nabla_kf. \end{equation} \\ \textbf{Proof.}\\ For $\lambda_{ij}=\epsilon^{*}_{ij}=-\epsilon_{ij}$ we have $$ \Delta^{(\epsilon^{*})}f=\sum^{N}_{i,j=1}\epsilon^{*}_{ij}D_i\nabla_jf=\sum_{i<j}\epsilon^{*}_{ij}D_i\nabla_{j}f+\sum_{i>j}\epsilon^{*}_{ij}D_i\nabla_jf= $$ $$ =\sum_{i<j}\left(D_i\nabla_jf-D_j\nabla_if\right)= $$ $$ =(N-1)\sum_{i<j}\left(\nabla_i\nabla_jf-\nabla_j\nabla_if\right)-\frac{1}{N-1}\sum_{i<j}\left(R_i\nabla_jf-R_j\nabla_if\right)= $$ $$ =-(N-1)\sum_{i<j}\sum^{N}_{k=1}(\nabla_kf)Q_{ikj}-\frac{1}{N-1}\sum^{N}_{k,s=1}\epsilon_{ks}R_s\nabla_kf= $$ $$ =-(N-1)\sum^{N}_{k=1}A_k\nabla_kf-\frac{1}{N-1}\sum^{N}_{k,s=1}\epsilon_{ks}R_s\nabla_kf= $$ $$ =-\sum^{N}_{k=1}\left((N-1)A_k+\frac{1}{N-1}\sum^{N}_{s=1}\epsilon_{ks}R_s\right)\nabla_kf. $$ Since $\Delta^{(\epsilon)}f=-\Delta^{(\epsilon^{*})}f$, the result follows. $qed$\\ \\ \textbf{Example 1.}\\ If $\lambda_{ij}=\epsilon_{ij}$ and $\textbf{S}$ is $\textbf{S}$-$R$, then \begin{equation} \Delta^{(\epsilon)}g=0, \end{equation} where $dg=R$. That is because $$ \Delta^{(\epsilon)}g=(N-1)\sum^{N}_{k=1}A_kR_k+\frac{1}{N-1}\sum^{N}_{k,s=1}\epsilon_{ks}R_kR_s=0, $$ since $\sum^{N}_{k=1}A_kR_k=0$ and $\sum^{N}_{k,s=1}\epsilon_{ks}R_kR_s=0$. Also, easily we get \begin{equation} \left\langle\Delta^{(\epsilon)}\overline{x},\overline{R}\right\rangle=0. \end{equation} \\ \textbf{Example 2.}\\ In an $\textbf{S}$-$R$ space we have $R=dg$ for a certain $g$, and thus $\nabla_kg=R_k$. Hence we can write \begin{equation} \Delta_2g=(N-1)\sum^{N}_{k=1}\nabla_kR_k-\frac{1}{N-1}\sum^{N}_{k=1}R_k^2. \end{equation} \\ \textbf{Theorem 22.}\\ In the general case, when $\lambda_{ij}$ is any field, we can write \begin{equation} \Delta^{(\lambda)}f=(N-1)\sum^{N}_{i,j=1}\lambda_{ij}\nabla_{i}\nabla_{j}f-\frac{1}{N-1}\sum^{N}_{i,j=1}\lambda_{ij}R_{i}\nabla_{j}f \end{equation} and \begin{equation} \left\langle\Delta^{(\lambda)}\overline{x},\overline{e}_k\right\rangle=(N-1)\sum_{i,j=1}^N\lambda_{ij}q_{jki}-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i. \end{equation} \\ \textbf{Proof.}\\ The first identity is easy. For the second we have: setting $f\rightarrow \overline{x}$, we get $$ \Delta^{(\lambda)}\overline{x}=\sum^{N}_{i,j=1}\lambda_{ij}D_i\left(\nabla_j\overline{x}\right)=\sum^{N}_{i,j=1}\lambda_{ij}D_i\left(\overline{e}_j\right)= $$ $$ =\sum^{N}_{i,j=1}(N-1)\lambda_{ij}\nabla_i\overline{e}_j-\sum^{N}_{i,j=1}\frac{\lambda_{ij}}{N-1}R_i\overline{e}_j= $$ $$ =\sum^{N}_{i,j=1}\lambda_{ij}\left((N-1)\nabla_i\overline{e}_j-\frac{1}{N-1}R_i\overline{e}_j\right)= $$ $$ =\sum^{N}_{i,j,k=1}(N-1)\lambda_{ij}q_{jki}\overline{e}_k-\sum^{N}_{i,k=1}\frac{\lambda_{ik}}{N-1}R_i\overline{e}_k= $$ $$ =\sum^{N}_{k=1}\left((N-1)\sum^{N}_{i,j=1}\lambda_{ij}q_{jki}-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i\right)\overline{e}_k. $$ \\
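\textbf{Example.}\\ In the polar coordinate frame above one can verify Theorem 21 directly. There $$ \Delta^{(\epsilon)}f=\epsilon_{12}D_1\nabla_2f+\epsilon_{21}D_2\nabla_1f=-D_1\nabla_2f+D_2\nabla_1f, $$ and a short computation gives $D_1\nabla_2f=D_2\nabla_1f=\frac{1}{u_1}\partial_1\partial_2f$, hence $\Delta^{(\epsilon)}f=0$ for every $f$. The same value is obtained from the formula of Theorem 21, since $$ A_1+\epsilon_{12}R_2=0\textrm{ and }A_2+\epsilon_{21}R_1=\frac{1}{u_1}-\frac{1}{u_1}=0. $$ \\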
We need some notation in order to proceed further.\\ \\ \textbf{Definition 9.}\\ For any field $\lambda_{ij}$ we define $$ R^{\{\lambda\}}_k:=\sum^{N}_{i=1}\lambda_{ik}R_{i},\eqno{(111.1)} $$ $$ A^{(\lambda)}_{k}:=\sum_{i<j}\lambda_{ij}Q_{ikj}\textrm{, }A^{{(\lambda)}{*}}_k:=\sum_{i<j}\lambda_{ij}Q^{*}_{ikj},\eqno{(111.2)} $$ where $$ Q_{ikj}:=q_{ikj}-q_{jki}\textrm{, }Q^{*}_{ikj}:=q_{ikj}+q_{jki}\eqno{(111.3)} $$ and $$ R^{(\lambda)}_{k}:=\sum^{N}_{s=1}\epsilon_{ks}A^{(\lambda)}_s\textrm{, }R^{{(\lambda)}{*}}_{k}:=\sum^{N}_{s=1}\epsilon_{ks}A^{{(\lambda)}{*}}_s.\eqno{(111.4)} $$ \\ \textbf{Proposition 5.} $$ \sum^{N}_{s=1}A^{(\lambda)}_sR^{(\lambda)}_s=0\textrm{, }\sum^{N}_{s=1}A^{{(\lambda)}{*}}_sR^{{(\lambda)}{*}}_s=0.\eqno{(111.5)} $$ \\ \textbf{Theorem 23.}\\ Let $\lambda_{ij}$ be antisymmetric, i.e. $\lambda_{ij}=-\lambda_{ji}$. Then we have in general:\\ 1) $$ \Delta^{(\lambda)}f=-\sum^{N}_{k=1}\left((N-1)A^{(\lambda)}_k+\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i\right)\nabla_kf. $$ 2) $$ \left\langle\Delta^{(\lambda)}\overline{x},\overline{e}_k\right\rangle=-(N-1)A^{(\lambda)}_k-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i. $$ 3) $$ \sum^{N}_{k=1}R_kR^{\{\lambda\}}_k=0\Leftrightarrow \left\langle \overline{R},\overline{R}^{\{\lambda\}}\right\rangle=0. $$ Hence $$ \left\langle \Delta^{(\lambda)}\overline{x},\overline{R}\right\rangle=-(N-1)\sum^{N}_{k=1}A^{(\lambda)}_kR_k $$ and $$ \left\langle \Delta^{(\lambda)}\overline{x},\overline{R}^{(\lambda)}\right\rangle=-\frac{1}{N-1}\sum^{N}_{i,k=1}\lambda_{ik}R_iR^{(\lambda)}_k. $$ 4) In case the space is $\textbf{S}$-$R$, with $\nabla_kg=R_k$, then $$ \Delta^{(\lambda)}g=-(N-1)\sum^{N}_{k=1}A^{(\lambda)}_kR_k=\left\langle \Delta^{(\lambda)}\overline{x},\overline{R}\right\rangle. $$ \\ \textbf{Proof.}\\ Case (1) follows by a straightforward evaluation. Since $\lambda_{ij}$ is antisymmetric, we have $$ \Delta^{(\lambda)}f=\sum_{i<j}\lambda_{ij}D_i\nabla_jf-\sum_{i<j}\lambda_{ij}D_j\nabla_if= $$ $$ =\sum_{i<j}\lambda_{ij}\left[(N-1)\nabla_i\nabla_jf-\frac{1}{N-1}R_i\nabla_jf-\left((N-1)\nabla_j\nabla_if-\frac{1}{N-1}R_j\nabla_if\right)\right]= $$ $$ =(N-1)\sum_{i<j}\lambda_{ij}\left(\nabla_i\nabla_jf-\nabla_j\nabla_if\right)-\frac{1}{N-1}\sum_{i<j}\lambda_{ij}\left(R_i\nabla_jf-R_j\nabla_if\right)= $$ $$ =(N-1)\sum_{i<j}\lambda_{ij}\left(-\sum^{N}_{k=1}\nabla_kfQ_{ikj}\right)-\frac{1}{N-1}\sum_{i<j}\lambda_{ij}rot_{ij}(R_i\nabla_jf)= $$ $$ =-(N-1)\sum^{N}_{k=1}A^{(\lambda)}_k\nabla_kf-\frac{1}{N-1}\sum^{N}_{k=1}\left(\sum^{N}_{i=1}\lambda_{ik}R_i\right)\nabla_kf= $$ $$ =-\sum^{N}_{k=1}\left((N-1)A^{(\lambda)}_k+\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i\right)\nabla_kf. $$ In the second we use $\nabla_k\overline{x}=\overline{e}_k$.\\ The third relation can be proved if we consider the formula $$ \sum^{N}_{i,k=1}\lambda_{ik}R_iR_k=\sum_{i<k}\lambda_{ik}R_{i}R_k-\sum_{i<k}\lambda_{ik}R_kR_i=0. $$ The fourth case follows easily from the first with $\nabla_kg=R_k$. Note that in the most general case, where $\lambda_{ij}$ is arbitrary, we still have $$ \sum^{N}_{k=1}A^{(\lambda)}_kR^{(\lambda)}_k=0. 
$$ \\ \textbf{Theorem 24.}\\ In case $\lambda_{ij}$ is any antisymmetric field, we have \begin{equation} \Delta^{(\lambda)}f=\sum^{N}_{k=1}\left\langle\Delta^{(\lambda)}\overline{x},\overline{e}_{k}\right\rangle\nabla_kf=\left\langle\Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}(f)}\right\rangle, \end{equation} where \begin{equation} \overline{\textrm{grad}(f)}:=\sum^{N}_{k=1}(\nabla_kf)\overline{e}_k. \end{equation} \\ \textbf{Proof.}\\ If $\lambda_{ij}$ is any antisymmetric field, then using Theorems 22 and 23 (see also Theorem 2 and Corollary 1) we get the result.\\ \\ \textbf{Corollary 6.}\\ If $\lambda_{ij}$ is any antisymmetric field such that \begin{equation} \Delta^{(\lambda)}\left(\overline{x}\right)=\overline{0}, \end{equation} then for every $f$ we have \begin{equation} \Delta^{(\lambda)}f=0. \end{equation} \\ \textbf{Remark 3.}\\ The above corollary is quite striking: the operator $\Delta^{(\lambda)}$ over an antisymmetric field is zero for every function $f$ when (114) holds. This forces us to conclude that in any space the vector \begin{equation} \overline{\sigma}^{(\lambda)}:=\sum^{N}_{k=1}\sigma^{(\lambda)}_k\overline{e}_k=\Delta^{(\lambda)}\left(\overline{x}\right) \end{equation} must play a very prominent role in the geometry of $\textbf{S}$. Then (in any space): \begin{equation} \sigma^{(\lambda)}_k=(N-1)\sum_{i,j=1}^N\lambda_{ij}q_{jki}-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i. \end{equation} In case $\lambda_{ij}$ is antisymmetric, we get $$ \sigma^{(\lambda)}_k=-(N-1)A_k^{(\lambda)}-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i= $$ $$ =-(N-1)A_k^{(\lambda)}-\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(\lambda)}_{kl}A_l, $$ where \begin{equation} \epsilon^{(\lambda)}_{kl}:=\sum^{N}_{i=1}\lambda_{ik}\epsilon_{il}. \end{equation} Hence, when $\lambda_{ij}$ is antisymmetric, $$ \sigma^{(\lambda)}_{k}=-(N-1)A_k^{(\lambda)}-\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(\lambda)}_{kl}A_l. $$ Hence \begin{equation} \sigma^{(\lambda)}_{k}=-\sum_{i<j}\left[(N-1)\lambda_{ij}Q_{ikj}+\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(\lambda)}_{kl}Q_{ilj}\right]. \end{equation} \\ \textbf{Definition 10.}\\ We define \begin{equation} \Lambda^{(\lambda)}_{kij}:=(N-1)\lambda_{ij}Q_{ikj}+\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(\lambda)}_{kl}Q_{ilj} \end{equation} and \begin{equation} M^{(\lambda)}_{kij}:=(N-1)\lambda_{ij}Q^{*}_{ikj}-\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(\lambda)}_{kl}Q_{ilj}. \end{equation} Also we define the mean curvature with respect to the field $\lambda_{ij}$ as \begin{equation} H^{(\lambda)}_k:=(N-1)\sum^{N}_{i,j=1}\lambda_{ij}q_{jki}. \end{equation} Then we have \begin{equation} H^{(\lambda^{+})}_k=\frac{N-1}{2}\sum^{N}_{i,j=1}\lambda_{ij}Q^{*}_{ikj} \end{equation} and \begin{equation} H^{(\lambda^{-})}_k=-\frac{N-1}{2}\sum^{N}_{i,j=1}\lambda_{ij}Q_{ikj}, \end{equation} where $\lambda^{+}$ is the symmetric part and $\lambda^{-}$ is the antisymmetric part of $\lambda_{ij}$.\\ \\ \textbf{Remark 4.}\\ It holds that \begin{equation} \Lambda_{kij}^{(\lambda)}+M_{kij}^{(\lambda)}=2(N-1)\lambda_{ij}q_{ikj}. \end{equation} \\ \textbf{Theorem 24.1}\\ In any space and for any $\lambda_{ij}$, we have $$ \Delta^{(\lambda)}\left(\overline{x}\right)=\sum^{N}_{k=1}\left(H^{(\lambda)}_k-\frac{1}{N-1}R^{\{\lambda\}}_k\right)\overline{e}_k.\eqno{(125.1)} $$ Hence also $$ \Delta^{(\lambda)}\left(\overline{x}\right)=\overline{H}^{(\lambda)}-\frac{1}{N-1}\overline{R}^{\{\lambda\}},\eqno{(125.2)} $$ where (from Definition 9) $$ \overline{R}^{\{\lambda\}}=\sum^{N}_{k=1}R^{\{\lambda\}}_k\overline{e}_k. 
$$ \\ \textbf{Proof.}\\ Easy from the above.\\ \\ \textbf{Theorem 25.}\\ If $\lambda_{ij}$ is antisymmetric, we have \begin{equation} \sigma^{(\lambda)}_{k}=-\sum_{i<j}\Lambda^{(\lambda)}_{kij}. \end{equation} If $\lambda_{ij}=\lambda^{+}_{ij}+\lambda^{-}_{ij}$, then $$ \sigma^{(\lambda)}_{k}=\sum^{*}_{i\leq j}M^{(\lambda^+)}_{kij}-\sum_{i<j}\Lambda^{(\lambda^-)}_{kij}= $$ \begin{equation} =\sum_{i<j}M^{(\lambda^+)}_{kij}-\sum_{i<j}\Lambda^{(\lambda^-)}_{kij}+(N-1)\sum^{N}_{i=1}\lambda_{ii}q_{iki}, \end{equation} where the asterisk on the summation means that when $i=j$ the summands are multiplied by $\frac{1}{2}$.\\ \\ \textbf{Corollary 7.}\\ 1) We have \begin{equation} \Delta^{(\lambda)}\overline{x}=\overline{0} \end{equation} iff for all $k=1,2,\ldots,N$ we have \begin{equation} H^{(\lambda)}_k-\frac{1}{N-1}R^{\{\lambda\}}_k=0. \end{equation} 2) For every $\lambda_{ij}$ we have $$ R^{\{\lambda\}}_k=\sum^{N}_{l,s=1}\epsilon_{ls}\lambda_{lk}\sum_{i<j}Q_{isj}.\eqno{(129.1)} $$ \\ \textbf{Proof.}\\ Indeed, then we have \begin{equation} \sigma_k^{(\lambda)}=H^{(\lambda)}_k-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i, \end{equation} where \begin{equation} H^{(\lambda)}_k=(N-1)\sum^{N}_{i,j=1}\lambda_{ij}q_{jki}=H^{(\lambda^{+})}_k+H^{(\lambda^{-})}_k. \end{equation} In case $\lambda_{ij}=g_{ij}$ is the metric tensor, we write \begin{equation} H_k:=H^{(g)}_k=(N-1)\sum^{N}_{i,j=1}g_{ij}q_{ikj} \end{equation} and call $H_k$ the mean curvature of the surface $\textbf{S}$.\\ \\ \textbf{Theorem 26.}\\ If $\lambda_{ij}=g_{ij}$ is the metric tensor, then $\Delta^{(g)}\overline{x}=\overline{0}$ iff \begin{equation} H_{k}-\frac{1}{N-1}\sum^{N}_{i=1}g_{ik}R_i=0. \end{equation} \\ \textbf{Corollary 8.}\\ If $\lambda_{ij}$ is antisymmetric, then $$ \Delta^{(\lambda)}(fg)=f\Delta^{(\lambda)}g+g\Delta^{(\lambda)}f. $$ \\ \textbf{Proof.}\\ Use Theorem 24 with $$ \overline{\textrm{grad}(fg)}=f\overline{\textrm{grad}(g)}+g\overline{\textrm{grad}(f)}. $$ \\ \textbf{Theorem 27.}\\ Assume that $\lambda_{ij}=\epsilon_{ij}$ (which is antisymmetric). Then \begin{equation} \Delta^{(\epsilon)}\overline{x}=\overline{0}\Leftrightarrow (N-1)A_k-\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(2)}_{kl}A_l=0, \end{equation} where \begin{equation} \epsilon^{(2)}_{kl}=\sum^{N}_{i=1}\epsilon_{ik}\epsilon_{il}. \end{equation} For every space $\textbf{S}$ whose $A_{i}$ satisfy the above condition, and for every function $f$, we have \begin{equation} \Delta^{(\epsilon)}f=0. \end{equation} \\ \textbf{Proof.}\\ Use Theorem 23 with $\lambda_{ij}=\epsilon_{ij}$ and then Definitions 9 and 6, observing that $A^{(\epsilon)}_k=-A_k$ and $\sum^{N}_{i=1}\epsilon_{ik}R_i=\sum^{N}_{l=1}\epsilon^{(2)}_{kl}A_l$.\\ \\ \textbf{Remark 5.} From the above propositions we conclude that in every space $\textbf{S}$ we have at least one antisymmetric field (namely $\lambda_{ij}=\epsilon_{ij}$) such that, under the condition \begin{equation} (N-1)A_k-\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(2)}_{kl}A_l=0\Leftrightarrow \sum_{i<j}\Lambda^{(\epsilon)}_{kij}=0, \end{equation} we have \begin{equation} \Delta^{(\epsilon)}\left(f\right)=0\textrm{, }\forall f. \end{equation} Hence in every space $\textbf{S}$ the quantity \begin{equation} \sigma^{(\epsilon)}_k:=(N-1)A_k-\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(2)}_{kl}A_l\textrm{, }k=1,2,\ldots,N \end{equation} is important. 
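\textbf{Example.}\\ For the polar coordinate frame above and $N=2$ one computes $\epsilon^{(2)}_{kl}=\delta_{kl}$, so that $$ \sigma^{(\epsilon)}_k=A_k-A_k=0\textrm{, }k=1,2. $$ Hence the condition of Theorem 27 holds there, and indeed $\Delta^{(\epsilon)}\overline{x}=\overline{0}$, in accordance with the vanishing $\Delta^{(\epsilon)}f=0$ found in the example after Theorem 22.\\ \\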
More generally, the quantities $\sigma^{(\lambda)}_k$ of (119), with $\lambda_{ij}$ antisymmetric, are of particular interest.\\ \\ \textbf{Corollary 9.}\\ If $\lambda_{ij}$ is antisymmetric, then \begin{equation} \Delta^{(\lambda)}f=\frac{1}{(N-1)^2}\left\langle \Delta^{(\lambda)}\overline{x},\overline{R}\right\rangle f+\frac{1}{N-1}\left\langle \overline{Df},\Delta^{(\lambda)}\overline{x}\right\rangle. \end{equation} \\ \textbf{Proof.}\\ It holds that $$ \overline{Df}=(N-1)\overline{\textrm{grad} f}-\frac{1}{N-1}\overline{R}f. $$ Hence $$ \left\langle\overline{Df},\Delta^{(\lambda)}\overline{x}\right\rangle=(N-1)\left\langle\overline{\textrm{grad}f},\Delta^{(\lambda)}\overline{x}\right\rangle-\frac{1}{N-1}\left\langle \Delta^{(\lambda)}\overline{x},\overline{R}\right\rangle f. $$ Now, since $\lambda_{ij}$ is antisymmetric, we have from Theorem 24 the result.\\ \\ \textbf{Corollary 10.}\\ If $\lambda_{ij}$ is antisymmetric, then \begin{equation} \left\langle\overline{Df},\Delta^{(\lambda)}\overline{x}\right\rangle=0\Leftrightarrow\Delta^{(\lambda)}f=\frac{1}{(N-1)^2}\left\langle \Delta^{(\lambda)}\overline{x},\overline{R}\right\rangle f. \end{equation} In particular \begin{equation} \overline{Df}=\overline{0}\Rightarrow\Delta^{(\lambda)}f=\frac{1}{(N-1)^2}\left\langle \Delta^{(\lambda)}\overline{x},\overline{R}\right\rangle f. \end{equation} \\ \textbf{Corollary 11.}\\ If in a space $\textbf{S}$ the field $\lambda_{ij}$ is antisymmetric with \begin{equation} \left\langle \Delta^{(\lambda)}\overline{x},\overline{R}\right\rangle=0, \end{equation} then $$ \left\langle \overline{H}^{(\lambda)}, \overline{R}\right\rangle=0 $$ and for every $f$ it holds that \begin{equation} \overline{Df}=\overline{0}\Rightarrow \Delta^{(\lambda)}f=0. \end{equation} \\ \textbf{Remark 6.}\\ If $\textbf{S}$ is $\textbf{S}$-$R$, then from Example 1 we have $\left\langle \Delta^{(\epsilon)}\overline{x},\overline{R}\right\rangle=0$. Hence in an $\textbf{S}$-$R$ space we have $$ \Delta^{(\epsilon)}f=\frac{1}{N-1}\left\langle \overline{Df},\Delta^{(\epsilon)}\overline{x}\right\rangle\eqno{(144.1)} $$ and the equation \begin{equation} \left\langle\overline{Df},\Delta^{(\epsilon)}\overline{x}\right\rangle=0\textrm{ is equivalent to }\Delta^{(\epsilon)}f=0. \end{equation} In particular, if $\overline{Df}=\overline{0}$, then $\Delta^{(\epsilon)}f=0$.\\ \\ \textbf{Theorem 27.1}\\ If $\lambda_{ij}$ is antisymmetric and \begin{equation} \Delta^{(\lambda)}\overline{x}=\sum^{N}_{k=1}\sigma^{(\lambda)}_k\overline{e}_{k}, \end{equation} then \begin{equation} \Delta^{(\lambda)}f=\sum^{N}_{k=1}\sigma^{(\lambda)}_{k}\nabla_{k}f. \end{equation} \\ \textbf{Lemma 2.}\\ If $\lambda_{ij}$ is symmetric, then \begin{equation} \Delta^{(\lambda)}f=(N-1)\sum_{i\leq j}^{*}\lambda_{ij}\nabla_i\nabla_jf+\sum^{N}_{k=1}\left((N-1)A^{(\lambda)}_k-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i\right)(\nabla_kf), \end{equation} where the asterisk on the summation means that when $i<j$ the summands are multiplied by 2 and when $i=j$ by 1.\\ \\ \textbf{Proof.}\\ Assume that $\lambda_{ij}$ is symmetric; then we can write $$ \Delta^{(\lambda)}f=\sum^{N}_{i,j=1}\lambda_{ij}D_i\nabla_jf= $$ $$ =\sum^{N}_{k=1}\lambda_{kk}D_k\nabla_kf+\sum_{i<j}\lambda_{ij}\left(D_{i}\nabla_jf+D_j\nabla_if\right). 
$$ But $$ D_i\nabla_jf+D_j\nabla_if=(N-1)\nabla_i\nabla_jf-\frac{R_i\nabla_jf}{N-1}+(N-1)\nabla_j\nabla_if-\frac{R_j\nabla_if}{N-1}= $$ $$ =(N-1)\left(\nabla_i\nabla_jf+\nabla_j\nabla_if\right)-\frac{1}{N-1}\left(R_i\nabla_jf+R_j\nabla_if\right)= $$ $$ (N-1)\left(2\nabla_i\nabla_jf+\sum^{N}_{k=1}\nabla_kfQ_{ikj}\right)-\frac{1}{N-1}\left(R_i\nabla_jf+R_j\nabla_if\right). $$ Hence $$ \sum_{i<j}\lambda_{ij}\left(D_{i}\nabla_jf+D_j\nabla_if\right)= $$ $$ =2(N-1)\sum_{i<j}\lambda_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{k=1}(\nabla_kf)\sum_{i<j}\lambda_{ij}Q_{ikj}- $$ $$ -\frac{1}{N-1}\sum^{N}_{i,k=1}\lambda_{ik}R_i\nabla_kf+\frac{1}{N-1}\sum^{N}_{k=1}\lambda_{kk}R_k\nabla_kf= $$ $$ =2(N-1)\sum_{i<j}\lambda_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{k=1}A^{(\lambda)}_{k}(\nabla_kf)-\frac{1}{N-1}\sum^{N}_{i,k=1}\lambda_{ik}R_i\nabla_kf+ $$ $$ +\frac{1}{N-1}\sum^{N}_{k=1}\lambda_{kk}R_k\nabla_kf. $$ Also $$ \sum^{N}_{k=1}\lambda_{kk}D_k\nabla_kf=(N-1)\sum^{N}_{k=1}\lambda_{kk}\nabla^2_kf-\frac{1}{N-1}\sum^{N}_{k=1}\lambda_{kk}R_k(\nabla_kf). $$ Hence combining the above we get the first result.\\ \\ \textbf{Theorem 28.}\\ If $\lambda_{ij}$ is symmetric, then \begin{equation} \Delta^{(\lambda)}f=(N-1)\sum^{*}_{i\leq j}\lambda_{ij}\left(\nabla_jf\right)_{;i}+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle, \end{equation} where the asterisk in the sum means that if $i<j$, then the summands are multiplied by 2, and in case $i=j$ by 1.\\ \\ \textbf{Proof.}\\ From Lemma 2 we have that if $\lambda_{ij}$ is symmetric, then $$ \Delta^{(\lambda)}f=(N-1)\sum_{i\leq j}^{*}\lambda_{ij}\nabla_i\nabla_jf+\sum^{N}_{k=1}\left((N-1)A^{(\lambda)}_k-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i\right)(\nabla_kf). $$ But also from Theorem 22 we have $$ \frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i=(N-1)\sum^{N}_{i,j=1}\lambda_{ij}q_{ikj}-\left\langle \Delta^{(\lambda)}\overline{x},\overline{e}_k\right\rangle. $$ Hence $$ \sum^{N}_{k=1}\left((N-1)A^{(\lambda)}_k-\frac{1}{N-1}\sum^{N}_{i=1}\lambda_{ik}R_i\right)(\nabla_kf)= $$ $$ =(N-1)\sum^{N}_{k=1}\sum_{i<j}\lambda_{ij}Q_{ikj}\nabla_kf-(N-1)\sum^{N}_{i,j,k=1}\lambda_{ij}q_{ikj}\nabla_kf+\sum^{N}_{k=1}\left\langle \Delta^{(\lambda)}\overline{x},\overline{e}_k\right\rangle\nabla_kf= $$ $$ =(N-1)\sum^{N}_{k=1}\nabla_kf\left(\sum_{i<j}\lambda_{ij}q_{ikj}-\sum_{i<j}\lambda_{ij}q_{jki}-\sum_{i<j}\lambda_{ij}q_{ikj}-\sum_{i>j}\lambda_{ij}q_{ikj}\right)- $$ $$ -(N-1)\sum^{N}_{k,i=1}\lambda_{ii}q_{iki}\nabla_kf+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =-2(N-1)\sum^{N}_{k=1}\sum_{i<j}\lambda_{ij}q_{jki}\nabla_kf-(N-1)\sum^{N}_{k,i=1}\lambda_{ii}q_{iki}\nabla_kf+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle. $$ Hence $$ \Delta^{(\lambda)}f=2(N-1)\sum_{i< j}\lambda_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{i=1}\lambda_{ii}\nabla_i^2f -2(N-1)\sum_{i<j}\sum^{N}_{k=1}\lambda_{ij}q_{jki}\nabla_kf- $$ $$ -(N-1)\sum^{N}_{k,i=1}\lambda_{ii}q_{iki}\nabla_kf+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =(N-1)\sum^{*}_{i\leq j}\lambda_{ij}\left(\nabla_jf\right)_{;i}+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle. $$ \\ \textbf{Definition 11.}\\ Assume that $g_{ij}$ is the metric tensor of the surface $\textbf{S}$.
Then \begin{equation} \Delta^{(g)}\overline{x}=\overline{0} \end{equation} iff \begin{equation} (N-1)g_{ij}Q^{*}_{ikj}-\frac{1}{N-1}\sum^{N}_{l=1}\epsilon^{(g)}_{kl}Q_{ilj}=0, \end{equation} where \begin{equation} \epsilon^{(g)}_{kl}:=\sum^{N}_{i=1}g_{ik}\epsilon_{il}. \end{equation} We call a space $\textbf{S}$ a $G$-space iff $\Delta^{(g)}(\overline{x})=\overline{0}$.\\ \\ \textbf{Theorem 29.}\\ If $\textbf{S}$ is a $G$-space, then for all functions $f$, we have \begin{equation} \Delta^{(g)}f=(N-1)\sum^{*}_{i\leq j}g_{ij}\left(\nabla_jf\right)_{;i}. \end{equation} \\ \textbf{Theorem 30.}\\ If $\lambda_{ij}$ is any antisymmetric field, then \begin{equation} \sum^{N}_{i,j=1}\lambda_{ij}[D_i,\nabla_j]f+\frac{1}{f}\sum^{N}_{k=1}A^{(\lambda)}_kD_k\left(f^2\right) =\frac{f}{2(N-1)}\sum^{N}_{i,j=1}\lambda_{ij}\Theta^{(2)}_{ij}(R) . \end{equation} In case $\textbf{S}$ is $S$-$R$, then \begin{equation} \sum^{N}_{i,j=1}\lambda_{ij}[D_i,\nabla_j]f=-\frac{1}{f}\sum^{N}_{k=1}A_k^{(\lambda)}D_k\left(f^2\right). \end{equation} \\ \textbf{Proof.}\\ We first evaluate the bracket. $$ [D_i,\nabla_j]f=D_i\nabla_jf-\nabla_jD_if=(N-1)\nabla_i\nabla_jf-\frac{1}{N-1}R_i\nabla_jf- $$ $$ -\left((N-1)\nabla_j\nabla_if-\frac{1}{N-1}\nabla_j(R_if)\right)= $$ $$ (N-1)\left(\nabla_i\nabla_jf-\nabla_j\nabla_if\right)-\frac{1}{N-1}R_i\nabla_jf+\frac{1}{N-1}(f\nabla_jR_i+R_i\nabla_jf)= $$ $$ =-(N-1)\sum^{N}_{k=1}(\nabla_kf)Q_{ikj}+\frac{f}{N-1}\nabla_jR_i. $$ Hence if we multiply with $\lambda_{ij}$ and sum with respect to $i,j$, we get $$ P=\sum^{N}_{i,j=1}\lambda_{ij}[D_i,\nabla_j]f=-(N-1)\sum^{N}_{k,i,j=1}\lambda_{ij}\nabla_kfQ_{ikj}+\frac{f}{N-1}\sum^{N}_{i,j=1}\lambda_{ij}\nabla_jR_i. $$ Also we have from Theorem 7 and the antisymmetric property of $\lambda_{ij}$: $$ \sum^{N}_{i,j=1}\lambda_{ij}\nabla_jR_i=\frac{1}{2}\sum^{N}_{i,j=1}\lambda_{ij}(\nabla_jR_i-\nabla_iR_j)=-\frac{1}{2}\sum^{N}_{i,j,k=1}\lambda_{ij}R_kQ_{jki}+ $$ $$ +\frac{1}{2}\sum^{N}_{i,j=1}\lambda_{ij}\Theta^{(2)}_{ij}(R) =\frac{1}{2}\sum^{N}_{i,j,k=1}\lambda_{ij}R_kQ_{ikj}+\frac{1}{2}\sum^{N}_{i,j=1}\lambda_{ij}\Theta^{(2)}_{ij}(R). $$ Hence combining the above two results, we get $$ P=-(N-1)\sum^{N}_{k,i,j=1}\lambda_{ij}\nabla_kfQ_{ikj}+\frac{f}{2(N-1)}\sum^{N}_{i,j,k=1}\lambda_{ij}R_kQ_{ikj}+ $$ $$ +\frac{f}{2(N-1)}\sum^{N}_{i,j=1}\lambda_{ij}\Theta^{(2)}_{ij}(R)\Rightarrow $$ $$ fP=-\frac{1}{2}\sum^{N}_{k,i,j=1}\lambda_{ij}Q_{ikj}\left((N-1)\nabla_k\left(f^2\right)-\frac{R_kf^2}{N-1}\right)+ $$ $$ +\frac{f^2}{2(N-1)}\sum^{N}_{i,j=1}\lambda_{ij}\Theta_{ij}^{(2)}(R) $$ and the result follows.\\ \\ \textbf{Corollary 11.1.}\\ If $\lambda_{ij}$ is antisymmetric, then \begin{equation} \sum_{i<j}\lambda_{ij}[D_i,\nabla_j]f=-(N-1)\sum^{N}_{k=1}A_k^{(\lambda)}\nabla_kf+\frac{f}{N-1}\sum_{i<j}\lambda_{ij}\nabla_jR_i. \end{equation} \\ \textbf{Theorem 31.}\\ If $\lambda_{ij}=\lambda^{+}_{ij}+\lambda^{-}_{ij}$ is any field, then $$ \Pi^{(\lambda)} f:=\sum^{N}_{i,j=1}\lambda_{ij}\left[D_i,\nabla_j\right]f=\frac{1}{f(N-1)}\sum^{N}_{k=1}H^{(\lambda^{-})}_kD_k\left(f^2\right)+ $$ \begin{equation} +\frac{f}{2(N-1)}\sum^{N}_{i,j=1}\lambda^{-}_{ij}\Theta^{(2)}_{ij}(R) +\frac{f}{N-1}\sum^{N}_{i,j=1}\lambda^{+}_{ij}\nabla_jR_i. \end{equation} \\ \textbf{Corollary 12.}\\ If $\lambda_{ij}$ is any field and $\overline{D\left(f^2\right)}=\overline{0}$, then there exists $\mu$ independent of $f$ such that \begin{equation} \Pi^{(\lambda)}f=\mu f.
\end{equation} In particular \begin{equation} \mu=\frac{1}{2(N-1)}\sum^{N}_{i,j=1}\lambda^{-}_{ij}\Theta^{(2)}_{ij}(R) +\frac{1}{N-1}\sum^{N}_{i,j=1}\lambda^{+}_{ij}\nabla_jR_i. \end{equation} \\ \textbf{Theorem 32.}\\ If $\lambda_{ij}$ is symmetric, then $\Pi^{(\lambda)}$ simplifies considerably: \begin{equation} \Pi^{(\lambda)}f=\frac{f}{N-1}\sum^{N}_{i,j=1}\lambda_{ij}\nabla_jR_i. \end{equation} \\ \textbf{Theorem 33.}\\ If $\lambda_{ij}$ is any symmetric field, then for every function $f$ we have \begin{equation} \Pi^{(\lambda)}f=0\textrm{, }\forall f \end{equation} iff \begin{equation} \sum^{N}_{i,j=1}\lambda_{ij}\nabla_jR_i=0. \end{equation} \\ \textbf{Remark 6.}\\ 1) In any space $\textbf{S}$ we define the quantity \begin{equation} \eta^{(\lambda)}:=\frac{1}{N-1}\sum^{N}_{i,j=1}\lambda_{ij}\nabla_jR_i. \end{equation} 2) If $\lambda_{ij}$ is symmetric, then \begin{equation} \Pi^{(\lambda)}f=\eta^{(\lambda)}f. \end{equation} 3) If $\lambda_{ij}=g_{ij}$ (the metric tensor), then $\eta^{(g)}:=\eta$, $\Pi^{(g)}:=\Pi$ and \begin{equation} \Pi f:=\sum^{N}_{i,j=1}g_{ij}[D_i,\nabla_j]f=\eta f, \end{equation} where \begin{equation} \eta=\frac{1}{N-1}\sum^{N}_{i,j=1}g_{ij}\nabla_jR_i. \end{equation} \\ \textbf{Theorem 34.}\\ If $\lambda_{ij}$ is any field and $\lambda^{(S)}_{ij}=\frac{1}{2}\left(\lambda_{ij}+\lambda_{ji}\right)$, then \begin{equation} \Delta^{(\lambda)}f=(N-1)\sum^{*}_{i\leq j}\lambda^{(S)}_{ij}\left(\nabla_jf\right)_{;i}+\left\langle\Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle. \end{equation} \\ \textbf{Proof.}\\ Write $\lambda_{ij}=\lambda^{(S)}_{ij}+\lambda^{(A)}_{ij}$, where $\lambda^{(S)}_{ij}=\frac{1}{2}(\lambda_{ij}+\lambda_{ji})$ is the symmetric part of $\lambda_{ij}$ and $\lambda^{(A)}_{ij}=\frac{1}{2}(\lambda_{ij}-\lambda_{ji})$ is the antisymmetric part of $\lambda_{ij}$.
Then we have $$ \Delta^{(\lambda)}f=\sum^{N}_{i,j=1}\lambda^{(S)}_{ij}D_i\nabla_{j}f+\sum^{N}_{i,j=1}\lambda^{(A)}_{ij}D_i\nabla_{j}f= $$ $$ =2(N-1)\sum_{i<j}\lambda^{(S)}_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{i=1}\lambda^{(S)}_{ii}\left(\nabla_if\right)_{;i}-2(N-1)\sum^{N}_{k=1}\sum_{i<j}\lambda^{(S)}_{ij}q_{jki}\nabla_kf+ $$ $$ +(N-1)\sum^{N}_{k,i,j=1}\lambda^{(S)}_{ij}q_{ikj}\nabla_kf-\frac{1}{N-1}\sum^{N}_{i,k=1}\lambda^{(S)}_{ik}R_i\nabla_kf- $$ $$ -(N-1)\sum^{N}_{k=1}A^{(A)}_k\nabla_kf -\frac{1}{N-1}\sum^{N}_{i,k=1}\lambda^{(A)}_{ik}R_i\nabla_kf= $$ $$ =2(N-1)\sum_{i<j}\lambda^{(S)}_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{i=1}\lambda^{(S)}_{ii}\left(\nabla_if\right)_{;i}- $$ $$ -(N-1)\sum^{N}_{k=1}\sum_{i>j}\lambda_{ij}\left(q_{ikj}-q_{jki}\right)\nabla_{k}f+(N-1)\sum^{N}_{k,i=1}\lambda_{ii}q_{iki}\nabla_kf- $$ $$ -\frac{1}{N-1}\sum^{N}_{k,i=1}\lambda_{ik}R_i\nabla_kf= $$ $$ =2(N-1)\sum_{i<j}\lambda^{(S)}_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{i=1}\lambda^{(S)}_{ii}\left(\nabla_if\right)_{;i}- $$ $$ -(N-1)\sum^{N}_{k=1}\sum_{i>j}\lambda_{ij}\left(q_{ikj}-q_{jki}\right)\nabla_{k}f+(N-1)\sum^{N}_{k,i=1}\lambda_{ii}q_{iki}\nabla_kf+ $$ $$ +\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle-(N-1)\sum^{N}_{k,i,j=1}\lambda_{ij}q_{jki}\nabla_kf= $$ $$ =2(N-1)\sum_{i<j}\lambda^{(S)}_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{i=1}\lambda^{(S)}_{ii}\left(\nabla_if\right)_{;i}- $$ $$ -(N-1)\sum^{N}_{k=1}\sum_{i>j}\lambda_{ij}\left(q_{ikj}-q_{jki}\right)\nabla_{k}f+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle- $$ $$ -(N-1)\sum^{N}_{k=1}\sum_{i<j}\lambda_{ij}q_{jki}\nabla_kf-(N-1)\sum^{N}_{k=1}\sum_{i>j}\lambda_{ij}q_{jki}\nabla_kf= $$ $$ =2(N-1)\sum_{i<j}\lambda^{(S)}_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{i=1}\lambda^{(S)}_{ii}\left(\nabla_if\right)_{;i}- $$ $$ -(N-1)\sum^{N}_{k=1}\sum_{i<j}\lambda_{ji}q_{jki}\nabla_{k}f+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle- $$ $$ -(N-1)\sum^{N}_{k=1}\sum_{i<j}\lambda_{ij}q_{jki}\nabla_kf= $$ $$ =2(N-1)\sum_{i<j}\lambda^{(S)}_{ij}\nabla_i\nabla_jf+(N-1)\sum^{N}_{i=1}\lambda^{(S)}_{ii}\left(\nabla_if\right)_{;i}- $$ $$ -2(N-1)\sum^{N}_{k=1}\sum_{i<j}\lambda^{(S)}_{ij}q_{jki}\nabla_{k}f+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =(N-1)\sum^{*}_{i\leq j}\lambda^{(S)}_{ij}\left(\nabla_jf\right)_{;i}+\left\langle \Delta^{(\lambda)}\overline{x},\overline{\textrm{grad}f}\right\rangle. $$ \\ \textbf{Definition 12.}\\ We call mean curvature vector of the surface $\textbf{S}$ the vector \begin{equation} \overline{H}=\sum^{N}_{k=1}H_k\overline{e}_k, \end{equation} where \begin{equation} H_k=(N-1)\sum^{N}_{i,j=1}g_{ij}q_{ikj}. \end{equation} \\ \textbf{Theorem 35.}\\ In case $\lambda_{ij}=g_{ij}$, then we have \begin{equation} \Delta^{(g)} f=(N-1)\sum^{N}_{i,j=1}g_{ij}\nabla_i\nabla_jf+\left\langle \Delta^{(g)}\overline{x}-\overline{H},\overline{\textrm{grad}f}\right\rangle. \end{equation} \\ \textbf{Proof.}\\ We know that $(\nabla_jf)_{;i}=(\nabla_if)_{;j}$ and $\lambda_{ij}=g_{ij}$ is symmetric. 
Hence from Theorem 28 we have $$ \Delta^{(g)}f=2(N-1)\sum_{i<j}g_{ij}(\nabla_jf)_{;i}+(N-1)\sum^{N}_{i=1}g_{ii}(\nabla_if)_{;i}+\left\langle \Delta^{(g)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =(N-1)\sum^{N}_{i<j}g_{ij}(\nabla_jf)_{;i}+(N-1)\sum^{N}_{i>j}g_{ij}(\nabla_jf)_{;i}+(N-1)\sum^{N}_{i=1}g_{ii}(\nabla_if)_{;i}+ $$ $$ +\left\langle \Delta^{(g)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =(N-1)\sum^{N}_{i,j=1}g_{ij}(\nabla_jf)_{;i}+\left\langle \Delta^{(g)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =(N-1)\sum^{N}_{i,j=1}g_{ij}\nabla_i\nabla_jf-(N-1)\sum^{N}_{i,j=1}g_{ij}\sum^{N}_{k=1}q_{jki}\nabla_kf+\left\langle \Delta^{(g)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =(N-1)\sum^{N}_{i,j=1}g_{ij}\nabla_i\nabla_jf-\sum^{N}_{k=1}H_k\nabla_kf+\left\langle \Delta^{(g)}\overline{x},\overline{\textrm{grad}f}\right\rangle= $$ $$ =(N-1)\sum^{N}_{i,j=1}g_{ij}\nabla_i\nabla_jf+\left\langle \Delta^{(g)}\overline{x}-\overline{H},\overline{\textrm{grad}f}\right\rangle $$ and the theorem is proved.\\ \\ \textbf{Note 7.}\\ \textbf{i)} The above theorem generalizes to any $\lambda_{ij}$ as \begin{equation} \Delta^{(\lambda)}f=(N-1)\sum^{N}_{i,j=1}\lambda^{(S)}_{ij}\nabla_i\nabla_jf+\left\langle \Delta^{(\lambda)}\overline{x}-\overline{H}^{(S)},\overline{\textrm{grad}f}\right\rangle, \end{equation} where $\lambda^{(S)}_{ij}=\frac{\lambda_{ij}+\lambda_{ji}}{2}$, \begin{equation} H^{(S)}_{k}=(N-1)\sum^{N}_{i,j=1}\lambda^{(S)}_{ij}q_{ikj} \end{equation} and \begin{equation} \overline{H}^{(S)}=\sum^{N}_{k=1}H^{(S)}_k\overline{e}_k. \end{equation} \textbf{ii)} If $\lambda_{ij}$ is symmetric, then $$ \Delta^{(\lambda)}f=(N-1)\sum^{N}_{i,j=1}\lambda_{ij}\nabla_i\nabla_jf-\frac{1}{N-1}\sum^{N}_{k=1}R^{\{\lambda\}}_k\nabla_kf.\eqno{(173.1)} $$ \textbf{iii)} If in a space $\textbf{S}$ we have for $\lambda_{ij}$ symmetric $\Delta^{(\lambda)}\left(\overline{x}\right)=\overline{H}^{(\lambda)}\Leftrightarrow\overline{R}^{\{\lambda\}}=\overline{0}$, then $$ \Delta^{(\lambda)}f=(N-1)\sum^{N}_{i,j=1}\lambda_{ij}\nabla_i\nabla_jf. $$ \\ \textbf{Definition 13.}\\ We can also extend the definition of the Beltrami operator so that it acts on vectors. This can be done as follows:\\ If $\overline{A}=A_1\overline{\epsilon}_1+A_2\overline{\epsilon}_2+\ldots+A_{N}\overline{\epsilon}_{N}$, where $\{\overline{\epsilon}_i\}_{i=1,2,\ldots,N}$ is a ''constant'' orthonormal basis of $\textbf{E}=\textbf{R}^{N}$, then \begin{equation} \Delta_2(\overline{A})=\Delta_2(A_1)\overline{\epsilon}_1+\Delta_2(A_2)\overline{\epsilon}_2+\ldots+\Delta_2(A_{N})\overline{\epsilon}_{N}. \end{equation} \\ \textbf{Theorem 36.}\\ If $\overline{V}_1$ and $\overline{V}_2$ are vector fields, then \begin{equation} \Delta_2\left\langle \overline{V}_1,\overline{V}_2\right\rangle=\left\langle \Delta_2\overline{V}_1,\overline{V}_2\right\rangle+\left\langle \overline{V}_1,\Delta_2\overline{V}_2\right\rangle+2(N-1)\sum^{N}_{k=1}\left\langle \nabla_k\overline{V}_1,\nabla_k\overline{V}_2\right\rangle. \end{equation} \\ \textbf{Proof.}\\ The proof follows by direct use of Theorem 7 and the identity \begin{equation} \nabla_k\left\langle \overline{V}_1, \overline{V}_2\right\rangle=\left\langle\nabla_k\overline{V}_1,\overline{V}_2\right\rangle+ \left\langle \overline{V}_1,\nabla_k\overline{V}_2\right\rangle.
\end{equation} \\ \textbf{Corollary 13.} \begin{equation} \left\langle \Delta_2\left(\overline{e}_k\right),\overline{e}_k \right\rangle=-(N-1)\sum^{N}_{j,l=1}q_{kjl}^2=\textrm{invariant.} \end{equation} \\ \textbf{Proof.}\\ From Theorem 36 and $$ \left\langle \overline{e}_k,\overline{e}_k \right\rangle=1, $$ we have \begin{equation} 2 \left\langle \Delta_2\left(\overline{e}_k\right),\overline{e}_k \right\rangle+2(N-1)\sum^{N}_{l=1}\left|\nabla_l\left(\overline{e}_k\right)\right|^2=0. \end{equation} Hence we get the result.\\
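As a quick sanity check of the flavour of Theorem 36, one can verify its flat-space scalar analogue $\Delta(fg)=f\Delta g+g\Delta f+2\left\langle \overline{\textrm{grad}f},\overline{\textrm{grad}g}\right\rangle$, in which (up to the normalization of the operators used in the text) the cross term plays the role of $2(N-1)\sum_{k}\left\langle \nabla_k\overline{V}_1,\nabla_k\overline{V}_2\right\rangle$. The following SymPy fragment is an illustration only; it uses the ordinary Laplacian of $\textbf{R}^3$, not the moving-frame operator $\Delta_2$:
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.sin(x) * sp.exp(y)        # arbitrary test functions
g = x**2 * z + sp.cos(y * z)

lap = lambda h: sum(sp.diff(h, v, 2) for v in (x, y, z))
grad = lambda h: [sp.diff(h, v) for v in (x, y, z)]

cross = 2 * sum(a * b for a, b in zip(grad(f), grad(g)))
# Delta(f*g) - f*Delta(g) - g*Delta(f) - 2<grad f, grad g> should vanish
print(sp.simplify(lap(f * g) - f * lap(g) - g * lap(f) - cross))  # -> 0
\end{verbatim}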
From Theorem 15 and Proposition 1 one can easily see that \begin{equation} \Delta_2(\overline{x})=\sum^{N}_{k=1}\left((N-1)\sum^{N}_{l=1}q_{lkl}-\frac{1}{N-1}R_k\right)\overline{e}_k. \end{equation} Hence if we define as the ''2-mean curvature vector'' (not to be confused with the $h_k$ of ($eq1$),($eq2$) on pg. 17) the vector \begin{equation} \overline{h}=\sum^{N}_{k=1}h_k\overline{e}_k, \end{equation} where \begin{equation} h_k:=(N-1)\sum^{N}_{l=1}q_{lkl}, \end{equation} then \begin{equation} h_k^{*}=\left\langle \Delta_2(\overline{x}),\overline{e}_k\right \rangle=h_k-\frac{1}{N-1}R_k \end{equation} and \begin{equation} \Delta_2\left(\overline{x}\right)=\sum^{N}_{k=1}h^{*}_k\overline{e}_k. \end{equation} Also we can write $$ \Delta_2\overline{x}=\overline{h}-\frac{1}{N-1}\overline{R} $$ and $$ \left\langle\Delta_2\overline{x},\overline{\textrm{grad}(f)}\right\rangle=\left\langle\overline{H},\overline{\textrm{grad}(f)}\right\rangle+\Delta_2f-(N-1)\sum^{N}_{k=1}\nabla_k^2f. $$ \\ \textbf{Proposition 6.}\\ The quantities $h_k=(N-1)\sum^{N}_{i=1}q_{iki}$ and $h_{ij}=\sum^{N}_{k=1}q_{ikj}^2$ are invariants.\\ \\ \textbf{Proof.}\\ It follows from the invariance of $\left\langle \nabla_l \overline{Y},\overline{e}_k\right \rangle$ for all $\overline{Y}$.\\ \\ \textbf{Proposition 7.}\\ The mean curvature satisfies the following relation: \begin{equation} h_k=\frac{1}{N-1}R_{k}+\left\langle\Delta_2(\overline{x}),\overline{e}_{k}\right\rangle=\textrm{invariant.} \end{equation} \\ \textbf{Definition 14.}\\ We call $\textbf{S}$ 2-minimal if $h_k=0$, $\forall k=1,2,\ldots,N$.\\ \\ \textbf{Theorem 37.}\\ The invariants $R^{\{M\}}_{j}=(N-1)\sum^{N}_{i=1}\left(\kappa^{\{M\}}_{ij}\right)^2$ have interesting properties. The sum $R^{\{M\}}_{o}=\sum^{N}_{j=1}R^{\{M\}}_j$ is also invariant and \begin{equation} \left|\nabla_k(\overline{e}_M)\right|^2=\frac{R^{\{M\}}_k}{N-1} \end{equation} and \begin{equation} \left \langle \Delta_2(\overline{e}_{M}),\overline{e}_{M}\right \rangle=-R^{\{M\}}_{o}. \end{equation} \\ \textbf{Proof.}\\ We have $$ \left\langle\nabla_l\overline{e}_k,\nabla_l\overline{e}_k\right\rangle=\sum^{N}_{s=1}q_{ksl}^2, $$ which gives (185).\\ From relations (177),(178) and (185) we get (186).\\ \\ \textbf{Definition 15.}\\ We call $R^{\{M\}}_{o}$ the $R^{\{M\}}_{o}$-curvature.\\ \\ The extreme case $R^{\{M\}}_{o}=0$ for a certain $M$ happens if and only if $\kappa^{\{M\}}_{ij}=0$, for all $i,j=1,2,\ldots,N$. This leads from relation (10) to $\omega_{iM}=0$, for all $i=1,2,\ldots,N$, which means that $III_{M}=0$ and hence $\overline{e}_M$ is a constant vector. We call such a space flat in the $M$ direction.\\ \\ \textbf{Application 1.}\\ In the case $\overline{x}=\overline{e}_{N}$, $\textbf{S}$ is a hypersphere and we have: \begin{equation} \nabla_{k}\left(\overline{e}_N\right)=\overline{e}_k=\sum^{N}_{j=1}q_{Njk}\overline{e}_j. \end{equation} Hence \begin{equation} q_{mNk}=-\delta_{mk}. \end{equation} From Theorem 36 we have $$ 0=\Delta_2\left \langle \overline{e}_N,\overline{e}_N\right\rangle=2\left\langle \Delta_2\overline{e}_N,\overline{e}_N\right\rangle+2(N-1)N. $$ Hence \begin{equation} R^{\{N\}}_{o}=-\left\langle \Delta_2\overline{e}_N,\overline{e}_N\right\rangle=N(N-1). \end{equation} Also $$ h_{N}=-(N-1)\sum^{N}_{l=1}\delta_{ll}=-N(N-1) $$ \begin{equation} h^{*}_{N}=-N(N-1)=h_N-\frac{1}{N-1}R_{N}\Rightarrow R_{N}=0 \end{equation} and \begin{equation} K^{\{N\}}=(-1)^N.
\end{equation} From $$ R_{iNml}=-\sum^{N}_{s=1}rot_{lm}\left(q_{isl}q_{Nsm}\right)=\sum^{N}_{s=1}rot_{lm}\left(q_{isl}q_{sNm}\right)= $$ $$ =-\sum^{N}_{s=1}rot_{lm}\left(q_{isl}\delta_{sm}-q_{ism}\delta_{sl}\right)=-\sum^{N}_{s=1}q_{isl}\delta_{sm}+\sum^{N}_{s=1}q_{ism}\delta_{sl}= $$ \begin{equation} =-q_{iml}+q_{ilm}=q_{mil}-q_{lim}=Q_{mil}=-Q_{lim}. \end{equation} Hence we get $$ R_{NNml}=0. $$ \\ \textbf{Theorem 38.}\\ In general \begin{equation} \Delta_2 f=(N-1)\sum^{N}_{k=1}\nabla_k^2f-\frac{1}{N-1}\left\langle \overline{R},\overline{\textrm{grad}f}\right\rangle. \end{equation} 1) If $\textbf{S}$ is 2-minimal, then \begin{equation} \Delta_2\overline{x}=-\frac{1}{N-1}\overline{R}. \end{equation} 2) If $t_i$ is any vector field, then one easily checks that in general \begin{equation} \sum^{N}_{i=1}(t_{i})_{;i}=\sum^{N}_{k=1}\nabla_kt_k-\sum^{N}_{k=1}t_kh_k \end{equation} and if $\textbf{S}$ is 2-minimal, then $$ \sum^{N}_{k=1}\left(t_k\right)_{;k}=\sum^{N}_{k=1}\nabla_kt_k. $$ \\ \textbf{Note 8.}\\ Assume (with Einstein's summation notation) that $$ \Delta_2\Phi:=g^{lm}\left(\frac{\partial^2\Phi}{\partial x^l\partial x^m}-\Gamma^{a}_{lm}\frac{\partial \Phi}{\partial x^a}\right).\eqno{(a)} $$ Then $$ h_k=\left \langle \Delta_2(\overline{x}),\overline{e}_k\right\rangle=g^{lm}b_{kl,m}.\eqno{(b)} $$ Hence $h_k$ is invariant. We call $h_k$ the mean curvature tensor.\\ \\ \textbf{Proof.}\\ We know that $$ b_{kl}=\left\langle \frac{\partial \overline{x}}{\partial x^l},\overline{e}_k\right\rangle\textrm{ and } \partial_l\overline{e}_k=\Gamma^{a}_{lk}\overline{e}_{a}. $$ We differentiate the first of these two identities with respect to $x^m$ and we have $$ \frac{\partial b_{kl}}{\partial x^m}=\left\langle \partial^2_{lm}\overline{x},\overline{e}_k\right\rangle+\left\langle\partial_{l}\overline{x},\partial_m\overline{e}_k\right\rangle=\left\langle \partial^2_{lm}\overline{x},\overline{e}_k\right\rangle+\Gamma^{a}_{mk}\left\langle\partial_l\overline{x},\overline{e}_a\right\rangle. $$ Hence $$ \left\langle \partial^2_{lm}\overline{x},\overline{e}_k\right\rangle=\partial_{m}b_{kl}-\Gamma^a_{mk}b_{al}. $$ But $$ h_k=\left\langle \Delta_2(\overline{x}),\overline{e}_k\right\rangle=g^{lm}\left\langle\partial^2_{lm}\overline{x},\overline{e}_k\right\rangle-g^{lm}\Gamma^a_{lm}\left\langle\partial_a\overline{x},\overline{e}_k\right\rangle= $$ $$ g^{lm}\left(\partial_mb_{kl}-\Gamma^a_{mk}b_{al}\right)-g^{lm}\Gamma^a_{lm}b_{ka}= $$ $$ =g^{lm}\left(\partial_mb_{kl}-\Gamma^a_{km}b_{al}-\Gamma^a_{lm}b_{ka}\right)=g^{lm}b_{kl,m}. $$
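Formula (a) of Note 8 is the classical coordinate expression of the second Beltrami operator. As a concrete check (a SymPy sketch using the round metric on the unit 2-sphere, independent of the moving-frame apparatus of the text), one can compute the Christoffel symbols from the metric and apply (a) to the function $\cos\theta$, an eigenfunction with eigenvalue $-2$:
\begin{verbatim}
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])  # unit 2-sphere metric
ginv = g.inv()

def Gamma(a, l, m):
    # Christoffel symbols of the second kind, Gamma^a_{lm}
    return sum(sp.Rational(1, 2) * ginv[a, s] *
               (sp.diff(g[s, l], x[m]) + sp.diff(g[s, m], x[l])
                - sp.diff(g[l, m], x[s]))
               for s in range(2))

def beltrami(Phi):
    # formula (a): Delta_2 Phi = g^{lm}(d^2 Phi/dx^l dx^m - Gamma^a_{lm} dPhi/dx^a)
    return sp.simplify(sum(
        ginv[l, m] * (sp.diff(Phi, x[l], x[m])
                      - sum(Gamma(a, l, m) * sp.diff(Phi, x[a]) for a in range(2)))
        for l in range(2) for m in range(2)))

print(beltrami(sp.cos(theta)))  # -> -2*cos(theta), eigenvalue -2
\end{verbatim}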
\section{The Spherical Forms} Consider also the Pfaff derivatives of a function $f$ with respect to the form $\omega_{Mm}$. It holds \begin{equation} df=\sum^{N}_{k=1}(\widetilde{\nabla}_{k}f)\omega_{Mk}. \end{equation} Assume the connections $q^{(1)}_{mMj}$ such that \begin{equation} \omega_{m}=\sum^{N}_{j=1}q^{(1)}_{mMj}\omega_{Mj}. \end{equation} Then from (10) we have \begin{equation} \omega_m=\sum^{N}_{s,j=1}q^{(1)}_{mMj}q_{Mjs}\omega_{s}. \end{equation} Hence \begin{equation} \sum^{N}_{j=1}q^{(1)}_{mMj}q_{Mjs}=\delta_{ms}, \end{equation} where $\delta_{ij}$ is the usual Kronecker symbol. Using this (196) becomes \begin{equation} df=\sum^{N}_{k=1}\widetilde{\nabla}_kf\sum^{N}_{s=1}q_{Mks}\omega_s=\sum^{N}_{s=1}\left(\sum^{N}_{k=1}(\widetilde{\nabla}_kf)q_{Mks}\right)\omega_s. \end{equation} Hence \begin{equation} \nabla_sf=\sum^{N}_{k=1}(\widetilde{\nabla}_kf)q_{Mks} \end{equation} and using (199) \begin{equation} \widetilde{\nabla}_lf=\sum^{N}_{s=1}{\nabla}_sfq^{(1)}_{sMl}. \end{equation} We set $\widetilde{q}_{mjl}$ to be the connection \begin{equation} \widetilde{\nabla}_l(\overline{e}_m)=\sum^{N}_{j=1}\widetilde{q}_{mjl}\overline{e}_j. \end{equation} One can easily see that \begin{equation} \widetilde{q}_{mjl}=\sum^{N}_{s=1}q_{mjs}q^{(1)}_{sMl}. \end{equation} From $d\overline{e}_M=\sum^{N}_{k=1}\omega_{Mk} \overline{e}_k$ we get $\widetilde{\nabla}_k(\overline{e}_M)=\overline{e}_k$ and $\left\langle\widetilde{\nabla}^2_k(\overline{e}_M),\overline{e}_M\right\rangle=\widetilde{q}_{kMk}$\\ Also if $w=\left\langle \overline{x},\overline{e}_M\right\rangle$, then $$ dw=\left\langle d\overline{x},\overline{e}_M\right\rangle+\left\langle \overline{x},d\overline{e}_M\right\rangle=\omega_{M}+\left\langle \overline{x},\sum^{N}_{k=1}\widetilde{\nabla}_k(\overline{e}_M)\omega_{Mk}\right\rangle= $$ $$ =\sum^{N}_{j=1}q^{(1)}_{MMj}\omega_{Mj}+\sum^{N}_{k=1}\left\langle \overline{x},\widetilde{\nabla}_k(\overline{e}_M)\right\rangle\omega_{Mk}= $$ $$ =\sum^{N}_{j=1}q^{(1)}_{MMj}\omega_{Mj}+\sum^{N}_{k=1}\left\langle \overline{x},\overline{e}_k\right\rangle \omega_{Mk}. $$ Hence \begin{equation} \widetilde{\nabla}_kw=q^{(1)}_{MMk}+\left\langle \overline{x},\overline{e}_k\right\rangle . \end{equation} Also $$ d\overline{x}=\sum^{N}_{k=1}(\nabla_k\overline{x})\omega_k=\sum^{N}_{k,j=1}\overline{e}_kq^{(1)}_{kMj}\omega_{Mj}=\sum^{N}_{j=1}\left(\sum^{N}_{k=1}q^{(1)}_{kMj}\overline{e}_k\right)\omega_{Mj}. 
$$ From this we get \begin{equation} \widetilde{\nabla}_j\overline{x}=\sum^{N}_{k=1}q^{(1)}_{kMj}\overline{e}_k\textrm{, }q^{(1)}_{kMj}\textrm{ is invariant } \end{equation} and $$ \widetilde{\nabla}_l\widetilde{\nabla}_l(w)=\widetilde{\nabla}_l\left(q^{(1)}_{MMl}\right)+\left\langle \widetilde{\nabla}_l\overline{x},\overline{e}_l\right\rangle+\left\langle \overline{x},\widetilde{\nabla}_l\overline{e}_l\right\rangle= $$ $$ =\widetilde{\nabla}_l\left(q^{(1)}_{MMl}\right)+\left\langle \sum^{N}_{k=1}q^{(1)}_{kMl}\overline{e}_k,\overline{e}_l\right\rangle+\left\langle \overline{x},\sum^{N}_{j=1}\widetilde{q}_{ljl}\overline{e}_j\right\rangle= $$ $$ =\widetilde{\nabla}_l\left(q^{(1)}_{MMl}\right)+q^{(1)}_{lMl}+\sum^{N}_{j=1}\left\langle\overline{x},\overline{e}_j\right\rangle\widetilde{q}_{ljl}. $$ But $$ \Delta_2^{III_{M}}w=(N-1)\sum^{N}_{j=1}\widetilde{\nabla}^2_jw-\frac{1}{N-1}\sum^{N}_{j=1}(\widetilde{\nabla}_jw)\widetilde{R}_j= $$ $$ =(N-1)\sum^{N}_{j=1}\widetilde{\nabla}_j\left(q^{(1)}_{MMj}\right)+(N-1)\sum^{N}_{l=1}q^{(1)}_{lMl}+(N-1)\sum^{N}_{l,j=1}\left\langle \overline{x},\overline{e}_j\right\rangle \widetilde{q}_{ljl}- $$ \begin{equation} -\frac{1}{N-1}\sum^{N}_{j=1}\left\langle \overline{x},\overline{e}_j\right\rangle \widetilde{R}_j-\frac{1}{N-1}\sum^{N}_{j=1}q^{(1)}_{MMj}\widetilde{R}_j. \end{equation} Also $$ \Delta_2^{III_{M}}\overline{e}_M=(N-1)\sum^{N}_{j=1}\widetilde{\nabla}^2_j\overline{e}_M-\frac{1}{N-1}\sum^{N}_{j=1}(\widetilde{\nabla}_j\overline{e}_M)\widetilde{R}_j= $$ \begin{equation} =(N-1)\sum^{N}_{l,j=1}\widetilde{q}_{ljl}\overline{e}_j-\frac{1}{N-1}\sum^{N}_{j=1}\widetilde{R}_j\overline{e}_j. \end{equation} Hence \begin{equation} \left\langle\Delta_2^{III_{M}}\overline{e}_M,\overline{x}\right\rangle=(N-1)\sum^{N}_{l,j=1}\left\langle \overline{x},\overline{e}_j \right\rangle \widetilde{q}_{ljl}-\frac{1}{N-1}\sum^{N}_{j=1}\left\langle \overline{x},\overline{e}_j \right\rangle\widetilde{R}_{j}. \end{equation} From (207) and (209) we get $$ \Delta_2^{III_{M}}\left\langle \overline{x},\overline{e}_M\right\rangle-\left\langle \overline{x},\Delta_2^{III_{M}}\overline{e}_M\right\rangle=(N-1)\sum^{N}_{l=1}q^{(1)}_{lMl}+ $$ \begin{equation} +(N-1)\sum^{N}_{j=1}\widetilde{\nabla}_j\left(q^{(1)}_{MMj}\right)-\frac{1}{N-1}\sum^{N}_{j=1}q^{(1)}_{MMj}\widetilde{R}_j. \end{equation} From Theorem 36 and formula (197) we get $$ \Delta_2^{III_M}\left\langle \overline{x},\overline{e}_{M}\right\rangle=\left\langle \Delta_2^{III_M}\overline{x},\overline{e}_M\right\rangle+\left\langle \overline{x},\Delta^{III_M}_2\overline{e}_M\right\rangle+ $$ $$ +2(N-1)\sum^{N}_{k=1}\left\langle \widetilde{\nabla}_k\overline{x},\widetilde{\nabla}_k\overline{e}_M\right\rangle. $$ Using (210), this becomes $$ (N-1)\sum^{N}_{l=1}q^{(1)}_{lMl}+\sum^{N}_{j=1}\widetilde{D}_jq^{(1)}_{MMj}=\left\langle \Delta_2^{III_M}\overline{x},\overline{e}_M\right\rangle+ $$ $$ +2(N-1)\sum^{N}_{k=1}\left\langle \sum^{N}_{j=1}q^{(1)}_{jMk}\overline{e}_j,\overline{e}_k\right\rangle, $$ that is, $$ \sum^{N}_{j=1}\widetilde{D}_jq^{(1)}_{MMj}+(N-1)\sum^{N}_{l=1}q^{(1)}_{lMl}=\left\langle \Delta_2^{III_M}\overline{x},\overline{e}_M\right\rangle+ $$ $$ +2(N-1)\sum^{N}_{k=1} q^{(1)}_{kMk}. $$ Hence we get the following theorem.\\ \\ \textbf{Theorem 39.}\\ In $\textbf{S}$ it holds that \begin{equation} \left\langle\Delta_2^{III_M}\overline{x},\overline{e}_M\right\rangle=-(N-1)\sum^{N}_{l=1}q^{(1)}_{lMl}+\sum^{N}_{l=1}\widetilde{D}_lq^{(1)}_{MMl}.
\end{equation} In case $\omega_{M}=0$, the above formula becomes $$ \left\langle\Delta_2^{III_M}\overline{x},\overline{e}_M\right\rangle=-(N-1)\sum^{N}_{l=1}q^{(1)}_{lMl}.\eqno{(211.1)} $$
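A concrete illustration of the forms $\omega_{Mk}$: for a round sphere of radius $R$ in $\textbf{R}^3$ with $\overline{e}_M=\overline{x}/R$, we have $d\overline{e}_M=d\overline{x}/R$, hence $\omega_{Mk}=\omega_k/R$ and, by (197), $q^{(1)}_{mMj}=R\,\delta_{mj}$; consequently $III_M=I/R^2$. The following SymPy fragment is an independent low-dimensional check of this last relation, not part of the formalism above:
\begin{verbatim}
import sympy as sp

theta, phi = sp.symbols('theta phi')
R = sp.Symbol('R', positive=True)
x = R * sp.Matrix([sp.sin(theta)*sp.cos(phi),
                   sp.sin(theta)*sp.sin(phi),
                   sp.cos(theta)])
n = x / R                                # unit normal e_M of the sphere
J_x = x.jacobian([theta, phi])
J_n = n.jacobian([theta, phi])
I_form   = sp.simplify(J_x.T * J_x)      # first fundamental form
III_form = sp.simplify(J_n.T * J_n)      # third fundamental form <d e_M, d e_M>
print(sp.simplify(III_form - I_form / R**2))  # -> zero matrix: III = I / R^2
\end{verbatim}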
\section{General Forms and Invariants} Let $C$ be a curve (one dimensional object) of $\textbf{S}$. Let also $s$ be the canonical (arc-length) parameter of $C$. Then \begin{equation} \overline{t}=\left(\frac{d\overline{x}}{ds}\right)_C, \end{equation} is the tangent vector of $C$ in $P\in \textbf{S}$. If we assume that $C$ lies in a hypersurface $S_{M-1}$ and we choose $\overline{n}=\overline{e}_{M}$ to be the normal vector of the tangent space of $S_{M-1}$, then \begin{equation} \left\langle \overline{t},\overline{n}\right\rangle=0. \end{equation} \\ \textbf{Definition 16.}\\ We call vertical curvature of $C\in S_{M-1}$ the quantity \begin{equation} \left(\frac{1}{\rho^{*}}\right)_C=\left\langle \frac{d\overline{t}}{ds},\overline{n}\right\rangle. \end{equation} \\ \textbf{Theorem 40.}\\ The vertical curvature $\left(\frac{1}{\rho^{*}}\right)_C$ is an invariant (by the word invariant we mean a quantity that remains unchanged under every acceptable transformation of the coordinates $u_i$).\\ \\ \textbf{Proof.}\\ This can be shown as follows. Differentiate (213) to get $$ \left\langle\frac{d\overline{t}}{ds},\overline{n} \right\rangle+\left\langle \overline{t},\frac{d\overline{n}}{ds}\right\rangle=0. $$ Hence $$ \left(\frac{1}{\rho^{*}}\right)_{C}=-\left\langle \frac{d\overline{x}}{ds},\frac{d\overline{n}}{ds}\right\rangle=-\frac{II_{M}}{I}=-\frac{\sum^{N}_{k=1}\omega_k\omega_{Mk}}{\sum^{N}_{k=1}\omega_k^2}=\textrm{invariant}. $$ \\ Using relations (8),(10), we get $$ \left(\frac{1}{\rho^{*}}\right)_{C_i}=-\left\langle \left(\frac{d\overline{x}}{ds}\right)_{C_i},\left(\frac{d\overline{e}_{M}}{ds}\right)_{C_i}\right\rangle= $$ $$ =-\left\langle \sum^{N}_{k=1}\left(\frac{\omega_k}{ds}\right)_{C_i}\overline{e}_k,\sum^{N}_{l=1}\left(\frac{\omega_{Ml}}{ds}\right)_{C_i}\overline{e}_l\right\rangle =-\sum^{N}_{k,l=1}\left(\frac{\omega_k}{ds}\right)_{C_i}\left(\frac{\omega_{Ml}}{ds}\right)_{C_i}\delta_{kl}= $$ $$ =-\sum^{N}_{k=1}\left(\frac{\omega_k}{ds}\right)_{C_i}\left(\frac{\omega_{Mk}}{ds}\right)_{C_i} =-\sum^{N}_{k,m=1}\left(\frac{\omega_k}{ds}\right)_{C_i}q_{Mkm}\left(\frac{\omega_m}{ds}\right)_{C_i}. $$ Hence \begin{equation} \left(\frac{1}{\rho^{*}}\right)_{C_i}=\sum^{N}_{k,m=1}\kappa^{\{M\}}_{km}\left(\frac{\omega_k}{ds}\right)_{C_i}\left(\frac{\omega_m}{ds}\right)_{C_i}. \end{equation} Assume now, in general, that there exist $N-1$ curves $C_i$, $i=1,2,\ldots,N-1$, passing through every point $P$ of $S_{M-1}$ and orthogonal to each other. Then in all $P\in S_{M-1}$ we have \begin{equation} \left\langle \left(\frac{d\overline{x}}{ds}\right)_{C_i},\left(\frac{d\overline{x}}{ds}\right)_{C_j}\right\rangle=\delta_{ij}\textrm{, }i,j=1,2,\ldots,N-1 \end{equation} and $$ \left\langle \left(\frac{d\overline{x}}{ds}\right)_{C_i},\overline{n} \right\rangle=0. $$ Also for these curves we have \begin{equation} \left(\frac{d\overline{x}}{ds}\right)_{C_i}=\sum^{N}_{k=1}\left(\frac{\omega_k}{ds}\right)_{C_i}\overline{e}_k=\sum^{N}_{k=1}\lambda_{ik}\overline{e}_{k}, \end{equation} where we have set \begin{equation} \left(\frac{\omega_k}{ds}\right)_{C_i}=\lambda_{ik}. \end{equation} Clearly from the orthogonality of $C_i$ we have that (where we have assumed with no loss of generality that $\omega_{M}=0$, hence $\lambda_{iM}=0$): $$ \sum_{1\leq k\leq N}^{*}\lambda_{ik}\lambda_{jk}=\delta_{ij}\textrm{, }\forall i,j\in\{1,2,\ldots,N-1\}.
$$ where the asterisk in the summation means that the value $k=M$ is omitted.\\ Using these facts in (215), we get (using the fact that a real matrix with orthonormal rows also has orthonormal columns): $$ \sum_{1\leq i\leq N}^{*}\left(\frac{1}{\rho^{*}}\right)_{C_i}=\sum^{*}_{1\leq i\leq N}\sum^{N}_{k,m=1}\kappa^{\{M\}}_{km}\lambda_{ik}\lambda_{im}=\sum^{N}_{k,m=1}\kappa^{\{M\}}_{km}\delta_{km}= $$ $$ =\sum^{N}_{k=1}\kappa^{\{M\}}_{kk}=\textrm{invariant}. $$ From the above we get the following theorem.\\ \\ \textbf{Theorem 41.}\\ Through any point $P$ of a hypersurface $S_{M-1}$ of the space $\textbf{S}$ there pass $N-1$ mutually orthogonal curves $C_i$, $i=1,2,\ldots,N-1$, and their vertical curvatures satisfy \begin{equation} \sum_{1\leq i\leq N}^{*}\left(\frac{1}{\rho^{*}}\right)_{C_i}=\frac{h_M}{N-1}. \end{equation} \textbf{Remark.} We mention here that by the word invariant we mean any quantity that remains unchanged under every acceptable choice of the parameters $\{u_i\}_{i=1,2,\ldots,N}$, as well as under changes of position and rotation of $\textbf{S}$.\\ \\
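Theorem 41 is the higher-dimensional analogue of a classical fact: for a surface in $\textbf{R}^3$, the sum of the normal curvatures over two orthogonal tangent directions is independent of the chosen directions (Euler's theorem). The SymPy fragment below is a low-dimensional sanity check on the cylinder, where $F=0$ so the rotated coordinate directions stay orthogonal; the sign differs from $(1/\rho^{*})_C=-II_M/I$ only by the orientation convention of the normal:
\begin{verbatim}
import sympy as sp

u, v, a = sp.symbols('u v alpha', real=True)
R = sp.Symbol('R', positive=True)
x = sp.Matrix([R*sp.cos(u), R*sp.sin(u), v])   # cylinder of radius R
xu, xv = x.diff(u), x.diff(v)
n = xu.cross(xv); n = n / sp.sqrt(n.dot(n))    # unit normal

E, F, G = xu.dot(xu), xu.dot(xv), xv.dot(xv)                     # first form
L, M, Nf = x.diff(u, 2).dot(n), xu.diff(v).dot(n), x.diff(v, 2).dot(n)

def k_n(du, dv):
    # normal curvature II/I in the tangent direction du*x_u + dv*x_v
    return (L*du**2 + 2*M*du*dv + Nf*dv**2) / (E*du**2 + 2*F*du*dv + G*dv**2)

# two orthonormal directions obtained by rotating the coordinate frame by alpha
k1 = k_n(sp.cos(a)/sp.sqrt(E), sp.sin(a)/sp.sqrt(G))
k2 = k_n(-sp.sin(a)/sp.sqrt(E), sp.cos(a)/sp.sqrt(G))
print(sp.simplify(k1 + k2))   # -> -1/R, independent of alpha
\end{verbatim}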
Set \begin{equation} T=\sum^{N}_{i=1}A_i\omega_{i}^2+\sum^{N}_{i,j=1}B_{ij}\omega_{ij}^2+\sum^{N}_{i,j,k=1}C_{ijk}\omega_{i}\omega_{jk}+\sum^{N}_{i,j,f,m=1}E_{ijfm}\omega_{ij}\omega_{fm} \end{equation} and assume the $N-1$ orthogonal curves $C_i$. If the value of $T$ along the direction of $C_i$ is \begin{equation} d_i=\left(\frac{T}{(ds)^2}\right)_{C_i}\textrm{, }i=1,2,\ldots,N-1\textrm{ and }d_{N}=0, \end{equation} then, summing over all $N$ directions, using the orthogonality and simplifying, we get $$ \sum^{N}_{i=1}d_i=\sum^{N}_{i=1}A_i+\sum^{N}_{i,j,m=1}B_{ij}q_{ijm}^2+\sum^{N}_{i,j,k=1}C_{ijk}q_{jki}+ $$ \begin{equation} +\sum^{N}_{i,j,f,m,n=1}E_{ijfm}q_{ijn}q_{fmn}. \end{equation} As an application we get that \begin{equation} T_0=AI+BII+CIII=A\sum^{N}_{i=1}\omega_{i}^2+B\sum^{N}_{i=1}\omega_{Ni}^2+C\sum^{N}_{i=1}\omega_{i}\omega_{Ni}=\textrm{invariant}. \end{equation} Then, summing over the orthogonal directions, we get \begin{equation} \sum^{N-1}_{i=1}d_i=A+B\sum^{N}_{i,j=1}q_{iNj}^2+C\sum^{N}_{i=1}q_{iNi}=\textrm{invariant} \end{equation} for every choice of constants $A,B,C$.\\ \\ \textbf{Definition 17.}\\ Let now $C$ be a space curve and let $\overline{t}_i$, $i=1,2,\ldots,N$, be a family of orthonormal vectors along $C$. We define \begin{equation} \left(\frac{1}{\rho_{ik}}\right)_{C}:=\left\langle\left( \frac{d\overline{t}_i}{ds}\right)_C,\overline{t}_k\right\rangle\textrm{, }\forall i,k\in\{1,2,\ldots,N\} \end{equation} and call $\left(\frac{1}{\rho_{ik}}\right)_C$ the ''$ik$-curvature'' of $C$, so that \begin{equation} \frac{d\overline{t}_i}{ds}=\sum^{N}_{k=1}\left(\frac{1}{\rho_{ik}}\right)_C\overline{t}_k\textrm{, }\forall i=1,2,\ldots,N. \end{equation} \\ \textbf{Theorem 42.}\\ The $\left (\frac{1}{\rho_{ij}}\right)_C$-curvatures are semi-invariants of the space. Moreover \begin{equation} \left(\frac{1}{\rho_{ij}}\right)_C=-\left(\frac{1}{\rho_{ji}}\right)_C \end{equation} and if $$ \overline{t}_i=\sum^{N}_{k=1}A_{ki}\overline{e}_k, $$ then \begin{equation} \left(\frac{1}{\rho_{ij}}\right)_C=\sum^{N}_{l=1}\sum^{N}_{m=1}A_{\{m\}i;l}A_{mj}\left(\frac{\omega_l}{ds}\right)_{C}, \end{equation} where $A_{\{m\}i;l}=\nabla_lA_{mi}-\sum^{N}_{k=1}q_{mkl}A_{ki}$.\\ \\ \textbf{Proof.}\\ We express $\overline{t}_i$ in the basis $\{\overline{e}_k\}$ and differentiate with respect to the canonical parameter $s$ of $C$; hence $$ \overline{t}_i=\sum^{N}_{k=1}A_{ki}\overline{e}_k, $$ where $(A_{ki})$ is an orthogonal matrix, i.e. $$ \sum^{N}_{i=1}A_{ki}A_{li}=\delta_{kl}. $$ Hence \begin{equation} \overline{e}_j=\sum^{N}_{k=1}A^{T}_{kj}\overline{t}_k, \end{equation} where $A^{T}$ is the transpose of $A$.\\ Differentiating $\overline{t}_i$ with respect to $s$, we get $$ \frac{d\overline{t}_i}{ds}=\sum^{N}_{k=1}\frac{dA_{ki}}{ds}\overline{e}_k+\sum^{N}_{k=1}A_{ki}\frac{d\overline{e}_k}{ds}= $$ $$ =\sum^{N}_{k=1}\frac{dA_{ki}}{ds}\overline{e}_k+\sum^{N}_{k=1}A_{ki}\sum^{N}_{l=1}\nabla_l\overline{e}_k\frac{\omega_l}{ds}= $$ $$ =\sum^{N}_{k=1}\frac{dA_{ki}}{ds}\overline{e}_k+\sum^{N}_{k,l,m=1}A_{ki}q_{kml}\frac{\omega_l}{ds}\overline{e}_m = $$ $$ =\sum^{N}_{k,l=1}\frac{(\nabla_lA_{ki})\omega_l}{ds}\overline{e}_k+\sum^{N}_{k,l,m=1}A_{ki}q_{kml}\frac{\omega_l}{ds}\overline{e}_m= $$ $$ =\sum^{N}_{m,l=1}\frac{(\nabla_lA_{mi})\omega_l}{ds}\overline{e}_m+\sum^{N}_{k,l,m=1}A_{ki}q_{kml}\frac{\omega_l}{ds}\overline{e}_m.
$$ Hence we can write \begin{equation} c_{im}:=\left\langle\frac{d\overline{t}_i}{ds},\overline{e}_m\right\rangle=\sum^{N}_{l=1}\left(\nabla_lA_{mi}-\sum^{N}_{k=1}q_{mkl}A_{ki}\right)\frac{\omega_l}{ds}=\sum^{N}_{l=1}A_{\{m\}i;l}\frac{\omega_l}{ds}. \end{equation} Hence \begin{equation} c_{im}=\sum^{N}_{l=1}\nabla_lA_{mi}\left(\frac{\omega_l}{ds}\right)_C-\sum^{N}_{l,k=1}A_{ki}q_{lkm}\left(\frac{\omega_l}{ds}\right)_C. \end{equation} Hence \begin{equation} \frac{dA_{mi}}{ds}=\left\langle\frac{d\overline{t}_i}{ds},\overline{e}_m\right\rangle+\sum^{N}_{l,k=1}A_{ki}q_{lkm}\left(\frac{\omega_l}{ds}\right)_C. \end{equation} Also $$ \left\langle \overline{t}_i,\overline{t}_j\right\rangle=\delta_{ij}\Rightarrow \left\langle \frac{d\overline{t}_i}{ds},\overline{t}_j\right\rangle+\left\langle \overline{t}_i,\frac{d\overline{t}_j}{ds}\right\rangle=0. $$ Hence \begin{equation} \left(\frac{1}{\rho_{ij}}\right)_{C}=-\left(\frac{1}{\rho_{ji}}\right)_{C}. \end{equation} From the orthonormality of $A_{ij}$ and $A^{T}_{ij}=A_{ji}$ we have \begin{equation} \sum^{N}_{i=1}A_{mi}A_{ji}=\delta_{mj}\Leftrightarrow \sum^{N}_{m=1}A_{im}A_{jm}=\delta_{ij}. \end{equation} Differentiating we get \begin{equation} \sum^{N}_{i=1}A_{\{m\}i;l}A_{ji}+\sum^{N}_{i=1}A_{mi}A_{\{j\}i;l}=0. \end{equation} Moreover \begin{equation} \frac{d\overline{t}_i}{ds}=\sum^{N}_{m=1}c_{im}\overline{e}_m=\sum^{N}_{j=1}\sum^{N}_{l,m=1}A_{\{m\}i;l}A_{mj}\left(\frac{\omega_l}{ds}\right)_C\overline{t}_j \end{equation} and the curvatures will be \begin{equation} \left(\frac{1}{\rho_{ij}}\right)_C=\sum^{N}_{l=1}\sum^{N}_{m=1}A_{\{m\}i;l}A_{mj}\left(\frac{\omega_{l}}{ds}\right)_{C}. \end{equation} Hence $$ \sum^{N}_{j=1}A_{pj}\left(\frac{1}{\rho_{ij}}\right)_C=\sum^{N}_{l=1}\sum^{N}_{m,j=1}A_{\{m\}i;l}A_{mj}A_{pj}\left(\frac{\omega_{l}}{ds}\right)_{C}= $$ $$ =\sum^{N}_{l=1}\sum^{N}_{m=1}A_{\{m\}i;l}\delta_{mp}\left(\frac{\omega_{l}}{ds}\right)_{C}=\sum^{N}_{l=1}A_{\{p\}i;l}\left(\frac{\omega_{l}}{ds}\right)_{C}=c_{ip}. $$ This leads us to write $$ \sum^{N}_{i,j=1}A_{ni}A_{pj}\left(\frac{1}{\rho_{ij}}\right)_C=\sum^{N}_{i=1}A_{ni}c_{ip}=\sum^{N}_{l,i=1}A_{\{p\}i;l}A_{ni}\left(\frac{\omega_l}{ds}\right)_{C}= $$ $$ =\sum^{N}_{l,i=1}A^{T}_{\{i\}p;l}A^{T}_{in}\left(\frac{\omega_l}{ds}\right)_C=\sum^{N}_{l,m=1}A^{T}_{\{m\}p;l}A^{T}_{mn}\left(\frac{\omega_l}{ds}\right)_C= $$ $$ =\left(\sum^{N}_{l,m=1}A_{\{m\}n;l}A_{mp}\left(\frac{\omega_l}{ds}\right)_C\right)^T=\left(\frac{1}{\rho_{np}}\right)^{T}_C= $$ $$ =\left(\sum^{N}_{l,m=1}\left(\nabla_lA_{\{m\}n}A_{mp}-\sum^{N}_{k=1}q_{mkl}A_{kn}A_{mp}\right)\right)^T\left(\frac{\omega_l}{ds}\right)_C= $$ $$ =\sum^{N}_{l,m=1}\left(\nabla_lA_{\{m\}p}A_{mn}-\sum^{N}_{k=1}q_{mkl}A_{kp}A_{mn}\right)\left(\frac{\omega_l}{ds}\right)_C= $$ $$ =\sum^{N}_{l,m=1}A_{\{m\}p;l}A_{mn}\left(\frac{\omega_l}{ds}\right)_C= $$ $$ =\left(\frac{1}{\rho_{pn}}\right)_C=-\left(\frac{1}{\rho_{np}}\right)_C. $$ Here we have used that $$ \left(A^{T}_{\{i\}j}\right)_{;l}=\nabla_l A_{ji}-\sum^{N}_{k=1}q_{jkl}A_{ki} $$ and $$ \left(A_{\{i\}j;l}\right)^{T}=\left(\nabla_lA_{ij}-\sum^{N}_{k=1}q_{ikl}A_{kj}\right)^{T}=\nabla_lA_{ji}-\sum^{N}_{k=1}q_{jkl}A_{ki}, $$ hence \begin{equation} \left(A^{T}_{\{i\}j}\right)_{;l}=\left(A_{\{i\}j;l}\right)^{T}. \end{equation}
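As a concrete instance of Definition 17 in $\textbf{E}=\textbf{R}^3$: for the Frenet frame of a unit-speed curve, the nonzero $ik$-curvatures are the classical curvature and torsion, and the antisymmetry of Theorem 42 is manifest. The SymPy check below on the helix is an independent illustration, outside the moving-frame formalism above:
\begin{verbatim}
import sympy as sp

s = sp.symbols('s', real=True)
a, b = sp.symbols('a b', positive=True)
c = sp.sqrt(a**2 + b**2)
x = sp.Matrix([a*sp.cos(s/c), a*sp.sin(s/c), b*s/c])  # unit-speed helix

t1 = x.diff(s)                          # tangent
t2 = t1.diff(s) / t1.diff(s).norm()     # principal normal
t3 = t1.cross(t2)                       # binormal

rho = lambda ti, tk: sp.simplify(ti.diff(s).dot(tk))  # (1/rho_{ik})_C
print(rho(t1, t2))   # -> a/(a**2 + b**2), the curvature
print(rho(t2, t3))   # -> b/(a**2 + b**2), the torsion
print(rho(t1, t3))   # -> 0
print(sp.simplify(rho(t2, t1) + rho(t1, t2)))  # -> 0, antisymmetry
\end{verbatim}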
\end{document}
\begin{document} \title{Generating Lode Runner Levels by Learning Player Paths with LSTMs} \author{Kynan Sorochan} \affiliation{ \institution{University of Alberta} \city{Edmonton} \country{Canada}} \email{[email protected]} \author{Jerry Chen} \affiliation{ \institution{University of Alberta} \city{Edmonton} \country{Canada}} \email{[email protected]} \author{Yakun Yu} \affiliation{ \institution{University of Alberta} \city{Edmonton} \country{Canada}} \email{[email protected]} \author{Matthew Guzdial} \affiliation{ \institution{University of Alberta} \city{Edmonton} \country{Canada}} \email{[email protected]} \renewcommand{\shortauthors}{Sorochan et al.} \begin{abstract} Machine learning has been a popular tool in many different fields, including procedural content generation. However, procedural content generation via machine learning (PCGML) approaches can struggle with controllability and coherence. In this paper, we attempt to address these problems by learning to generate human-like paths, and then generating levels based on these paths. We extract player path data from gameplay video, train an LSTM to generate new paths based on this data, and then generate game levels based on this path data. We demonstrate that our approach leads to more coherent levels for the game Lode Runner in comparison to an existing PCGML approach. \end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10010147.10010257.10010293</concept_id> <concept_desc>Computing methodologies~Machine learning approaches</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010257.10010293.10010294</concept_id> <concept_desc>Computing methodologies~Neural networks</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Computing methodologies~Machine learning approaches} \ccsdesc[500]{Computing methodologies~Neural networks} \keywords{datasets, neural networks, path learning, path detection} \maketitle \section{Introduction} Procedural content generation via machine learning (PCGML) is the study and application of machine learning to procedurally generating content, particularly for games \cite{summerville2018procedural}. While PCGML has enjoyed considerable popularity recently, a number of open problems exist. Particularly in comparison to traditional, non-ML PCG, PCGML approaches struggle with controllability and coherence. We define controllability as the ability for a user to impact particular attributes of the generated content. In traditional PCG, since the system is authored by a human, there are a number of strategies to allow a user to impact the output or enforce particular constraints \cite{smith2012case,liapis2016mixed,horswill2019imaginarium}. There are efforts to make PCGML approaches controllable, but this is still an under-explored problem \cite{mott2019controllable,sarkar2020controllable,cheng2020automatic,chen2020image}. In particular, we identify a lack of focus on approaches that allow users to specify high-level, intuitive constraints as the input to an ML generator that then outputs game content that matches those constraints. By coherence we refer to the problem of game content demonstrating global coherence: global structure that fits human understanding of that game content. Global structure includes a wide range of constraints, and is dependent on the particular game content in question.
For example, playability in game levels, the ability to complete a given level, is one element of global structure that we expect from human-authored game levels. However, a level being playable is not the only element of global structure that we expect. A completely flat platformer game level would be playable, but would violate other elements of global structure. Modeling global structure is a common problem in machine learning generally \cite{moon2019unified}. In PCGML, there have been attempts to model global structure, with the most common approach being to model the player's path through a game level \cite{summerville2016super,sarkar2020exploring}. However, this area is also under-explored. In this paper, we investigate a novel PCGML approach that attempts to address these two open problems: controllability and coherence. Specifically, we introduce an approach to generate Lode Runner levels based on specified paths. Our focus on coherence will be in reference to Lode Runner levels. We model these paths with a Long Short-Term Memory Recurrent Neural Network (LSTM RNN or LSTM) to ensure that they are human-like. For training data we extract real human paths on existing Lode Runner levels from gameplay video. We then use the LSTM to generate novel human-like paths and employ a Markov chain to generate novel levels based on these generated paths. We employ a Markov chain for this initial investigation as it represents a simple ML model that typically struggles to capture global structure. Therefore, it is an ideal choice to investigate whether this approach improves level coherence. Our approach is inherently controllable as we can input arbitrary paths, though we focus on our generated, human-like paths in this paper. We acknowledge that we will not be directly evaluating the controllability of this approach, but still contend that it is controllable. In this paper, we first introduce related prior work. We then overview our generator, from data extraction to the novel generation of Lode Runner levels. We compare the performance of our generator to an existing Markov chain generator without path data to evaluate whether we demonstrate improved coherence \cite{snodgrass2015hierarchical}. We then present a secondary evaluation of our approach in comparison to the original Lode Runner levels. We end with a discussion of our limitations and future work. \section{Related Works} In this section we overview work in terms of prior PCGML approaches to generate Lode Runner levels, controllability in PCGML, and coherence via paths in PCGML. Many PCGML approaches have been applied to level generation for Lode Runner in recent years. Thakkar et al. proposed the use of a variational autoencoder to generate new levels for Lode Runner based on a binary encoding of each character in the original levels. They attempted to improve the playability, one aspect of global structure, by searching the learned, latent space with an evolutionary algorithm \cite{thakkar2019autoencoder}. We also attempt to increase playability, but based on altering the input to the generation pipeline instead of including search-based PCG within the pipeline. Snodgrass and Ontan{\'o}n made use of Markov models to generate content for many games including Lode Runner \cite{snodgrass2016learning}; we include this approach as a baseline, as we also make use of a Markov chain for our generator. Markov chains have been a common method for PCGML since its inception \cite{snodgrass2013generating}.
Much of this prior work has focused on \emph{Super Mario Bros.} level generation, a common area of PCGML research \cite{snodgrass2013generating,snodgrass2014hierarchical,Summerville2015MCMCTSP4,snodgrass2015hierarchical}. We also employ Markov chains, as they typically struggle with global coherence in comparison to other methods \cite{guzdial2018co}. Many attempts have been made to improve the coherence of PCGML output \cite{thakkar2019autoencoder}. Of particular interest to us are approaches that attempt to do this by invoking some representation of the player path \cite{sarkar2020exploring}. Summerville et al. trained an LSTM on Super Mario Bros. levels that included representations of potential player paths \cite{summerville2016super}. Follow-up work by Summerville et al. extracted player paths from gameplay video and found that these led to significantly different output levels when they were used to train LSTMs \cite{summerville2016learning}. We use a similar method to extract player paths. The major difference between Summerville et al.'s approach and ours is that they alter the level representation to include the path information. Instead, we generate novel paths and use these as input for a PCGML level generator. The Summerville et al. generator could not take a specified path as input without modification. This and much of the prior work mentioned above is based on the representations from the Video Game Level Corpus (VGLC) \cite{VGLC}, which we draw on for our level representation.
\section{System Overview} We aim to show the benefits of human-like paths to improve global coherence in PCGML generators. Our approach can be divided into several steps: (1) extracting the paths from gameplay videos, (2) training an LSTM on the path data, (3) training our Markov chain on the original Lode Runner levels, and (4) generating a new level from a generated path. \subsection{Data extraction} Our goal for this first step is to extract human paths for solving the original Lode Runner levels, which we extract from gameplay video. We focus on human paths instead of paths generated by an automated level playing agent as in prior work \cite{summerville2016super}. We made this choice as prior work found that human paths led to different output levels than automated paths \cite{summerville2016super}.
Based on this prior work, we make the assumption that automated paths would lead to levels that were more sparse, while more human-like paths would lead to levels closer to the original game. We leave a verification of this assumption for future work. We download a series of videos from YouTube, based on similar approaches in prior work \cite{guzdial2016toward,summerville2016super}. Once we have the videos, we extract the frames and track the location of the player in each frame for each level, based on the same approach used in the above prior work. We tag each location with the type of movement using OpenCV and pattern matching \cite{bradski2000opencv}. We do this by hand tagging a series of images or sprites representing the different actions of the player in Lode Runner. In each frame, we identify the player's action or type of movement based on the image with the highest probability. We map the player's location in the frame to a 32 by 22 grid, which is the size of the VGLC dataset for each level. This gives us a sequence of 32 by 22 grids equal in length to the amount of time the player played a given level. After obtaining this sequence we can identify one out of five possible actions for each grid (moving left as 'l', moving right as 'r', climbing up as 'u', climbing down as 'c', and falling down as 'f') based on the location change of the player between pairs of grids. This allows us to store a player's path for each level as a one-dimensional sequence. \subsection{Path generation} We could have simply reused the extracted player paths as the input for a Lode Runner level generation process. However, this would have limited the number of levels our system could generate. As such, we need some way to generate new human-like paths. Our extracted human paths vary significantly both in terms of length and patterns \cite{github}. To address this, we first split the sequences of actions into small chunks of a fixed size (50). We represent each action as a one-hot encoding of length five for the five actions. Every action becomes a vector of length five, with every action type represented as an index in that vector. An action is represented by a 1 at its index if it occurred at this position in the sequence and a 0 otherwise. Thus our data becomes a series of 50x5 matrices. We employ a Long Short-Term Memory Recurrent Neural Network (LSTM) for our path modeling, as LSTMs have been demonstrated to work well on PCGML tasks with sequence-like data \cite{summerville2016learning}. An LSTM is designed to better learn long-term dependencies than standard recurrent neural networks. This is important for our use case as the human paths tended to have many repeated actions in a row, and we did not want our model to learn to just repeat the same action endlessly. We employ an LSTM-based Seq2Seq model that is built to generate new paths. Our input is one 50-length path sequence and the expected output is the next 50-length path sequence. The model is composed of 2 layers with 512 LSTM cells each. We employ dropout and gradient clipping to prevent overfitting and gradient explosion, respectively. The final layer is a fully connected layer of length 50 with softmax activation, to better represent the probability distribution of a single character at each time step. We trained our model for 60 epochs with a 0.001 learning rate and the Adam optimizer \cite{kingma2017adam}. We probabilistically sample each output action based on treating the softmax activation as a probability distribution.
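To make the model concrete, a minimal Keras sketch of the path model is given below. This is our reconstruction, not the authors' released code: the dropout rate, the clipping threshold, and the reduction of the Seq2Seq setup to a single stacked-LSTM network mapping one 50-step chunk to the next are all assumptions.
\begin{verbatim}
# Minimal sketch of the path model (our reconstruction; hyperparameters
# not stated in the text, e.g. dropout rate and clipnorm, are guesses).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, N_ACTIONS = 50, 5          # actions: l, r, u, c, f (one-hot)

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, N_ACTIONS)),
    layers.LSTM(512, return_sequences=True),
    layers.Dropout(0.2),
    layers.LSTM(512, return_sequences=True),
    layers.Dropout(0.2),
    # one softmax distribution over the five actions at each time step
    layers.TimeDistributed(layers.Dense(N_ACTIONS, activation="softmax")),
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0),
    loss="categorical_crossentropy",
)
# model.fit(X, Y, epochs=60), with X the current 50-step chunks, Y the next.

def sample_chunk(prev_chunk, rng=np.random.default_rng()):
    """Sample the next 50-step chunk, treating each softmax row as a
    probability distribution, as described above."""
    probs = model.predict(prev_chunk[None, ...], verbose=0)[0]   # (50, 5)
    out = np.zeros_like(probs)
    for t, p in enumerate(probs):
        out[t, rng.choice(N_ACTIONS, p=p / p.sum())] = 1.0
    return out
\end{verbatim}
Starting from an all-zero chunk and repeatedly feeding each generated chunk back in yields paths of arbitrary length, as the following paragraph notes.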
We note that despite using 50 inputs and 50 outputs this model can be used to create paths of arbitrary lengths by inputting empty (all 0s) actions initially, and then continually generating based on previously generated outputs. \subsection{Level Structure Learning} The path information we generate only partially defines a level. The same level path could be associated with a large, but not infinite, number of levels. This is particularly true in Lode Runner, where the player can directly ``make'' paths through their own actions (e.g. digging holes). This means that we still need more information about what kind of tiles can be associated with what action, and how to fill out the rest of the level ``away'' from the player path. Given our interest in demonstrating the impact of player paths in improving global coherence, we employ a multi-dimensional Markov chain \cite{snodgrass2014experiments}. In our multi-dimensional Markov chain each tile value depends on the tiles to its left and directly below, along with the player action at the current position, to the right, and above. We note that prior examples of platformer Markov chains have made use of nodes with 3 dependencies: the left, below, and to the left and below. We experimented with this model, but found that our 2-tile dependency sufficiently modeled level structure and led to increased diversity in the output. We train our multi-dimensional Markov chain on the Lode Runner levels from the VGLC \cite{VGLC}, with the added information from our extracted paths in the grid-based representation discussed above. We also record a series of statistics in terms of the number of enemies and gold pieces in the training levels. For each original level, we found the ratio of the numbers of enemies and gold pieces to the associated player path lengths. We represent these two distributions as two Gaussian distributions. This allows us to sample from these two distributions and derive an overall number of desired gold pieces and enemies for a new level. \subsection{Complete Level Generation} Our level generation process begins by generating a new path. From this path we can determine the minimum size of a level necessary to contain this path. We instantiate this level as an empty grid of the minimum size. We label every visited tile in this level with the action that the player path indicates, and the remaining tiles with a special token that indicates no actions. We automatically constrain some tile values based on the action type, based on the existing keys in the Markov chain. For example, if the action taken at a tile is to move up, then we know that this tile must contain a ladder. If instead the action at a tile is to move to the left, then this tile can be empty, contain a brick (as the player may break it from above and fall to it, then move left), or contain a rope. This gives us some high-level constraints on the tiles and a basic initial structure to work with. We apply our Markov chain to fill the level out with the final tile types from the partially specified state. We start from the bottom left corner and move along each row from left to right until we have reached the top of the screen. If a tile has been specified we do not need to generate a new tile at this location, and we just move on. If the tile has been partially specified then we remove the tile possibilities that have already been ruled out, and then probabilistically sample from the remaining options based on the learned probability distribution in the Markov chain.
If the tile has not been specified at all, then we simply probabilistically sample from the Markov chain as normal \cite{snodgrass2014experiments}. In the event that a key does not exist, we remove all dependencies except for the left and below dependencies and then sample from this simplified distribution. We made this choice because it removes all path requirements and only considers the structure of the level. The final step of our generation process is to place all enemy and gold tiles. We randomly sample from both of the Gaussian distributions we described above. We multiply the sampled values by the generated path length to get the final numbers of enemies and gold pieces. We then randomly place these elements along the player's path, as they are elements that the player would have to avoid or seek out, respectively. We made this choice as the Lode Runner levels in the VGLC only place enemies and gold pieces in place of empty tiles, but both entities can move. Further, the Markov chain struggled to learn to place these elements effectively given how rare they are in comparison to the other tile types. \section{Quantitative Evaluation} We extracted 66 player paths for 66 game levels from a 4.5-hour long gameplay video. After training on these sequences, we were able to generate an arbitrary number of levels. Figure \ref{fig:good1} gives an example of a generated level based on our approach. This research seeks to determine whether adding a player path as input to a PCGML generator improves global coherence. We employ Markov chains, a PCGML approach that tends to struggle with global coherence, in order to better understand the impact of player paths \cite{guzdial2018co}. As such, the natural choice for an initial evaluation is investigating how the inclusion of a generated path as input changes the generated levels in comparison to a Markov chain approach without player path information. We therefore compare against the work of Snodgrass and Ontan{\'o}n, who employed a Markov chain without path information to generate Lode Runner levels \cite{snodgrass2016learning}. We employed an A* pathfinding agent to test how a player might play the two different sets of Lode Runner levels \cite{github}. Ideally, we might have used a human subject study to compare the two types of generated levels. However, we lacked the time and resources for a human subject study, and so use this method for an initial comparison. The pathfinder starts from the player tile present in both sets of generated levels and attempts to pathfind to each gold piece in turn. Lode Runner is a complex game that allows players to momentarily trap enemies; as such, we ignore enemies while pathfinding. This decision was also motivated by the fact that we lacked a simulator to fully simulate enemy movement in the game, meaning that the enemies would otherwise be treated as impassable obstacles, which would not be appropriate. We note that prior work did not make this assumption, and so reported much lower percentages of playable outputs \cite{snodgrass2016learning}. However, since this assumption is made for both types of levels, it is still helpful for comparison purposes. The pathfinder tracks the number of nodes explored on the way to each gold piece as well as each gold piece that it was able to reach. It reports the totals of these two metrics, which we use to compare between the two types of levels. Ideally, an A* agent should be able to reach each gold piece from the starting location in a playable Lode Runner level.
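A minimal sketch of such a per-gold A* query is given below; the \texttt{successors} function, which would encode Lode Runner's movement rules (walking, climbing, falling), is assumed rather than shown, and enemies are ignored as described above.
\begin{verbatim}
import heapq

def nodes_to_gold(level, start, gold, successors):
    """A* search from `start` to one `gold` position, returning
    (reached, nodes_explored). `successors(level, pos)` yields the
    legal moves from a position under simplified movement rules."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - gold[0]) + abs(p[1] - gold[1])

    frontier = [(h(start), 0, start)]
    seen = {start}
    explored = 0
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        explored += 1
        if pos == gold:
            return True, explored
        for nxt in successors(level, pos):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return False, explored
\end{verbatim}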
We had the A* agent report the number of nodes explored on the way to each gold piece as a measure of coherence. All of the existing Lode Runner levels have clear paths to access each gold piece, essentially acting as puzzles for the player to solve. Therefore, we take a low number of nodes explored as an indirect measure of global coherence. We employ the following metrics in this comparative evaluation, reporting the average and standard deviation of each metric across the generated levels (a sketch of how we compute them appears at the end of this section): \begin{itemize} \item \textit{Gold Total Per Level} - The average total number of gold pieces the A* agent needed to find. This is used in determining the percentage of gold collected in a level, as well as the potential difficulty and length of a level. The more gold pieces, the more potential for difficulty, depending on how they are distributed throughout the level. It also coincides with how long a level may take to play: the more gold, the longer it could take a player to collect each piece. \item \textit{Percentage Collected Per Level} - The percentage of the gold pieces that could be collected in a level. Values closer to 100\% indicate more playable levels overall, and a higher average indicates that more of the levels are playable. \item \textit{Total Nodes Explored} - The number of nodes the A* agent needed in total to reach all of the reachable gold pieces. We do not include nodes explored when the pathfinder attempted to reach unreachable gold pieces. This number can be taken as an approximation of the minimum amount of time a player would need to complete a level: the higher the number, the longer the time needed to complete the level. \item \textit{Nodes Per Gold} - Since levels do not all have the same number of gold pieces, the total nodes explored metric on its own could potentially give an inaccurate reflection of each level. As such, this metric gives the average number of nodes needed to reach each reachable piece of gold per level. As with the above metric, the larger the number, the longer it could potentially take to reach each gold piece. \end{itemize} \noindent These metrics allow us to compare between our two sets of generated levels. We are particularly interested in the second metric as a measure of playability and the fourth metric as a measure of global coherence. We report the other two metrics for context. If the Snodgrass and Ontan{\'o}n levels outperform our levels in terms of playability, that could indicate that our method for placing gold pieces along the player path is flawed. If the Snodgrass and Ontan{\'o}n levels outperform our levels or perform similarly in terms of the nodes per gold metric, this would indicate that our inclusion of the player path did not improve global coherence, and led to similarly coherent/incoherent levels as a simpler Markov chain approach.
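For concreteness, the following minimal sketch shows how these per-level metrics can be aggregated from the per-gold A* outcomes; the data layout is hypothetical.
\begin{verbatim}
def level_metrics(results):
    """Aggregate one level's per-gold A* outcomes into our metrics.
    `results` is a list of (reached, nodes_explored) pairs, one per
    gold piece in the level."""
    reached = [n for ok, n in results if ok]
    total_nodes = sum(reached)  # unreachable attempts are excluded
    return {
        "gold_total": len(results),
        "pct_collected": 100.0 * len(reached) / len(results),
        "total_nodes_explored": total_nodes,
        "nodes_per_gold": total_nodes / max(len(reached), 1),
    }
\end{verbatim}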
\section{Quantitative Results} Table 1 includes all of our results in terms of our four metrics. The first column gives the average and standard deviation of the total number of gold pieces included in each generated level. This immediately demonstrates the impact of employing a Gaussian distribution to model the total number of gold pieces in a level, as opposed to leaving it up to a Markov chain alone to place gold pieces. The Snodgrass and Ontan{\'o}n levels have a much higher average and a much larger standard deviation. This indicates that their levels tended to have many more gold pieces on average compared to existing Lode Runner levels, and that this number varied widely, with the largest gold counts in the Snodgrass and Ontan{\'o}n levels being nearly three times as high. This is an initial indication that our approach led to more coherent levels. \begin{table*}[tbh] \begin{tabular}{|l|c|c|c|c|} \hline & Gold Total & Percentage Collected & Total Nodes Explored & Nodes per Gold \\ \hline Snodgrass and Ontan{\'o}n & 18.76\rpm 15.07 & 98.94\rpm 6.98 & 4638.93\rpm 10837.32 & 220.62\rpm 477.89 \\ \hline Ours & 7.65\rpm 2.98 & 94.68\rpm 19.07 & 910.33\rpm 2047.28 & 116.44\rpm 264.82 \\ \hline \end{tabular} \caption{Quantitative Evaluation Results} \end{table*} The second column of Table 1 shows the average amount of gold that can be collected per level, indicating how many of the generated levels were playable. These values indicate that the Snodgrass and Ontan{\'o}n levels are on average more likely to be playable. However, there is more complexity to this value than might first appear.
Since on average our levels have fewer gold pieces than the Snodgrass and Ontan{\'o}n levels, the impact of being unable to reach a single gold piece is much higher: collecting 6 of 7 available gold pieces results in a lower percentage than collecting 17 of 18. We also performed a Mann-Whitney U test to determine if these two distributions differed significantly. The test was unable to reject the null hypothesis ($p=0.05629$) that these two sets of values arose from the same underlying distribution. Thus, we take this to mean that the difference in terms of playability was not significant. As mentioned above, this runs counter to prior reported playability values \cite{snodgrass2016learning}, which is due to our choice not to model enemy locations. Thus, this playability metric should be considered an upper bound. \begin{figure} \caption{Boxplot Results of Nodes Explored Metric} \label{fig:boxplot} \end{figure} The third column of Table 1 gives the Total Nodes Explored metric, which we also visualize in Figure \ref{fig:boxplot} for clarity. It is immediately clear that the Snodgrass and Ontan{\'o}n generator leads to vastly more explored nodes than our approach. However, this is not a fair comparison due to the higher average number of gold pieces among these levels and the larger variance of the number of gold pieces. Thus, we turn to the fourth column and the average number of nodes explored for each gold piece. This comparison is closer, but still indicates that the Snodgrass and Ontan{\'o}n levels require roughly twice as many nodes to be explored to reach each gold piece. This difference is substantial: for each gold piece, it would take a player roughly twice the time to collect it in an average Snodgrass and Ontan{\'o}n level. This suggests that there are often no clear paths between the gold pieces in the Snodgrass and Ontan{\'o}n levels. The standard deviation also signifies that our levels are more consistent in terms of this metric. We take this as an indication of the greater global coherency of our generated levels: these levels include clear paths for the player to take to reach most gold pieces, even if the placement of some of these gold pieces makes them unreachable. Our findings suggest that both our method and the method used by Snodgrass and Ontan{\'o}n produce levels that are roughly equally playable according to our A* agent that ignored enemy positions. Looking at the metric values, it would have been very difficult to improve on this metric without achieving 100 percent playability. Our analysis did show that we improved the consistency and reliability of the solutions to levels. \section{Qualitative Evaluation} In this section, we assess the quality of our output levels in comparison to the original levels. We did this in order to get a more nuanced look at whether we have actually achieved greater global coherence. Given the results of our first set of evaluations, it is clear that our approach was able to produce levels with clearer and more consistent solutions. However, it is possible that our output levels no longer resemble the original Lode Runner levels. For example, they may have become too simple or have lost other aspects of global coherence. We use the following metrics to investigate this possibility: \begin{itemize} \item \textit{s} - The minimum size of the level to fit the path. Ideally, we would like the levels to match the size of the original Lode Runner levels: a 32x22 grid.
Since we do not explicitly enforce this, our hope is that our approach will have led to an implicit bias towards levels of this size. \item \textit{e} - The proportion of the level taken up by empty space. The original Lode Runner levels have a fair amount of variance when it comes to empty space, and empty space is often used strategically to create shapes or to indicate potential solutions. Thus, if our distribution of empty tiles matches those from the original levels, this would indicate a positive signal in terms of similar global structure. \item \textit{i} - The proportion of the level taken up by ``interesting'' tiles that are not simply solid or empty. The way that the Lode Runner levels employ ladders, ropes, enemies, and gold pieces is very important to the overall design of a level. \end{itemize} \section{Qualitative Results} \begin{figure} \caption{A good generated level.} \label{fig:good1} \end{figure} We generated 34 levels and compared these to the 34 original levels that we did not use to train our model. We found that 20 of our generated levels matched the expected size of 32x22, with 14 of our levels having a smaller size. This means that these levels could be expanded to fit the expected size while retaining their generated structure. This is a positive sign, as our approach was able to implicitly lead to levels with the same or similar sizes to the original levels without explicitly modeling this constraint. Notably, none of the generated levels were larger than the original levels, though this may be due to the fact that we employed a constant generated path size of 103 (the average length of the paths we extracted from the gameplay video). However, a generated path of 103 steps could still have led to a level larger than 32x22. Figure \ref{fig:figure_roi} and Figure \ref{fig:figure_space} show the distributions of ``interesting'' and empty space tiles in the generated and original levels, respectively. The distribution of ``interesting'' tiles does suggest that our levels tended to contain fewer interesting tiles than the original levels. However, the overall distributions are fairly similar. Further, it is possible that, due to the 14 smaller levels, the generated levels look more conservative than they truly are, since this distribution does not take level size into account. The empty space distribution seems to differ more, with the original levels employing much more empty space. This is not an unusual problem for Markov chain models: filling in too much content. However, we again note that the 14 smaller levels may be part of the problem here. If the remaining space of these 14 smaller levels were filled with empty tiles, the two distributions would look much more similar. \begin{figure} \caption{The distribution of ``interesting'' tiles.} \label{fig:figure_roi} \end{figure} \begin{figure} \caption{The distribution of empty space.} \label{fig:figure_space} \end{figure} \section{Discussion} The video we used for this paper contains 150 completed levels. However, trimming and cropping each level from the video was time-consuming and repetitive, so we only extracted player paths from 66 levels. If all levels were processed, the results might improve. Extracting path information from videos presents a couple of challenges.
Our method works well for most levels, but it does not work well on levels with many stairs or on levels with fake bricks (where there appears to be a brick, but it is actually empty space once the player steps on it). Due to the low image quality, when the player walks past the stairs, the combined figure becomes very difficult to recognize, which may sometimes lead to losing track of the player. The fake bricks have a similar effect: when the player falls through a fake brick, the program fails to detect the player and loses track of them. It is a limitation of our approach that we always assume a fixed path length. While we made this choice for simplicity, the original levels did not all have the same path length. As such, it would be better to model this path length value separately. Alternatively, we could generate the path until we hit the desired level size and then stop. \begin{figure} \caption{A bad generated level: badly placed enemies.} \label{fig:bad1} \end{figure} \begin{figure} \caption{A bad generated level: badly designed structure.} \label{fig:bad2} \end{figure} Figure \ref{fig:bad1} and Figure \ref{fig:bad2} show two typical issues that prevent the player from completing the generated levels. In Figure \ref{fig:bad1}, an enemy is placed directly beside the player, and there is no way to go around or trap the enemy. This shows that randomly placing enemies along the player's path is not ideal; we also need to take into consideration that the player must have a way to deal with enemies on the path. We ignored this problem for our pathfinding-based evaluation, but we will need to confront it in future work. Figure \ref{fig:bad2} shows a different problem: the generated structure led to the player getting stuck. This happens when the row the player is in and the row above it both have path information. When filling in the tiles, the system may then mistakenly decide that the bottom row can be bricks, since the player could dig along the upper row to create a path down to the lower row. But this will not work if the player starts from the lower row. This situation would trip up our A* pathfinder, and was one of the factors that led to our lower average playability score. In this initial investigation we assumed controllability due to our use of player paths as input. While we can alter these paths and produce new levels based on them, we did not include any evaluation of this aspect of our research. This would be difficult to evaluate without a human subject study, and so we leave it for future work.
\section{Conclusions} In this research project, we developed a player path-based method for the generation of Lode Runner levels. We extracted player paths from gameplay video to serve as training data, then used an LSTM Seq2Seq model to generate new player paths, and applied Markov chains to produce new levels based on these paths. Our experimental results show that this approach can improve the global coherence of the generated levels while still producing levels that resemble the originals. For future work, we hope to improve the proposed method to ensure playability, to use a more sophisticated pathfinding agent that takes into account factors such as enemy placement, and to test this approach on other games. \begin{acks} This work was funded by the Canada CIFAR AI Chairs Program. We acknowledge the support of the Alberta Machine Intelligence Institute (Amii). \end{acks} \appendix \end{document}
\begin{document} \begin{abstract} \jz{We are concerned with the direct and inverse scattering problems associated with a time-harmonic random Schr\"odinger equation with unknown source and potential terms. The well-posedness of the direct scattering problem is first established. Three uniqueness results are then obtained for the corresponding inverse problems in determining the variance of the source, the potential and the expectation of the source, respectively, by the associated far-field measurements. First, a single realization of the passive scattering measurement can uniquely recover the variance of the source without a priori knowledge of the other unknowns. Second, if active scattering measurements can further be obtained, a single realization can uniquely recover the potential function without knowing the source. Finally, both the potential and the first two statistical moments of the random source can be uniquely recovered with full measurement data. The major novelty of our study is that on the one hand, both the random source and the potential are unknown, and on the other hand, both passive and active scattering measurements are used for the recovery in different scenarios.} \noindent{\bf Keywords:}~~random Schr\"odinger equation, inverse scattering, passive/active measurements, \jz{asymptotic expansion, ergodicity} {\noindent{\bf 2010 Mathematics Subject Classification:}~~35Q60, 35J05, 31B10, 35R30, 78A40} \end{abstract} \maketitle \section{Introduction} \label{sec:Intro-SchroEqu2018} In this paper, we are mainly concerned with the following random Schr\"odinger system \begin{subequations} \label{eq:1} \begin{numcases}{} \displaystyle{ (-\Delta-E+V(x)) u(x, E, d, \omega)= f(x)+\sigma(x)\dot{B}_x(\omega), \quad x\in\mathbb{R}^3, } \label{eq:1a} \\ \displaystyle{ u(x, E, d, \omega)=\alpha e^{\mathrm{i}\sqrt E x\cdot d}+u^{sc}(x, E, d,\omega), } \label{eq:1b} \\ \displaystyle{ \lim_{r\rightarrow\infty} r\left(\frac{\partial u^{sc}}{\partial r}-\mathrm{i}\sqrt{E} u^{sc} \right)=0,\quad r:=|x|, } \label{eq:1c} \end{numcases} \end{subequations} \hy{where $f(x)$ and $\sigma(x)$ in \eqref{eq:1a} are the expectation and standard deviation of the source term, $d \in \mathbb{S}^2:=\{ x \in \R^3 \,;\, |x| = 1 \}$ signifies the impinging direction of the incident plane wave, and $E\in\mathbb{R}_+$ is the energy level. In \eqref{eq:1b}, $\alpha$ takes the value 0 or 1 to suppress or incur the presence of the incident wave, respectively. In the sequel, we follow the convention to replace $E$ with $k^2$, namely $k := \sqrt{E} \in \R_+$, which can be understood as the wave number. The limit in \eqref{eq:1c} is the Sommerfeld Radiation Condition (SRC) \cite{colton2012inverse} that characterizes the outgoing nature of the scattered wave field $u^{sc}$. The random system \eqref{eq:1} describes the quantum scattering associated with a potential $V$ and a random active source $(f, \sigma)$ at the energy level $k^2$.} \jz{In the system \eqref{eq:1}, the random parameter $\omega$ belongs to $\Omega$, with $(\Omega, \mathcal{F}, \mathbb{P})$ signifying a complete probability space. The term $\dot B_x(\omega)$ denotes the three-dimensional spatial Gaussian white noise \cite{dudley2002real}. The random part $\sigma(x) \dot B_x(\omega)$ within the source term in \eqref{eq:1a} is an ideal mathematical model for noises arising from real-world applications \cite{dudley2002real}.
We note that $\sigma^2(x)$ gives the intensity of the randomness of the source at the point $x$ and can be understood as the variance of $\sigma(x) \dot B_x(\omega)$. In what follows, we call $\sigma^2(x)$ the variance function. The statistical information of a single zero-mean Gaussian white noise is encoded in its variance function \cite{ross2014introduction}. In this paper, we are mainly concerned with the recovery of the variance and expectation of the random source as well as the potential function in \eqref{eq:1} by the associated scattering measurements as described in what follows.} \hy{In order to study the corresponding inverse problems, one needs to have a thorough understanding of the direct scattering problem. In the deterministic case with $\sigma\equiv 0$, the scattering system \eqref{eq:1} is well understood; see, e.g., \cite{colton2012inverse,griffiths2016introduction}. There exists a unique solution $u^{sc}\in H^1_{loc}(\mathbb{R}^3)$, and moreover the following asymptotic expansion holds as $|x|\rightarrow\infty$, \begin{equation}\label{eq:farfield} u^{sc}(x) = \frac{e^{\mathrm{i}k r}}{r} u^\infty(\hat x, k, d) + \mathcal{O} \left( \frac{1}{r^2} \right), \end{equation} where $\hat x := x/{|x|} \in \mathbb{S}^2$. The term $u^\infty$ is referred to as the far-field pattern, which encodes the information of the potential $V$ and the source $f$. In the same spirit, we shall show that the random scattering system \eqref{eq:1} is also well-posed in a proper sense and possesses a far-field pattern. To that end, throughout the rest of the paper, we assume that $\sigma^2$, $V$ and $f$ belong to $L^\infty(\mathbb{R}^3;\R)$ and that they are compactly supported in a fixed bounded domain $D\subset \mathbb{R}^3$ containing the origin. Under the aforementioned regularity assumption, we establish that the following mapping of the direct problem (\textbf{DP}) is well-posed in a proper sense, \begin{equation} \label{eq:dp-SchroEqu2018} \textbf{DP \ :} \quad (\sigma, V, f) \rightarrow \{u^{sc}(\hat x, k, d, \omega), u^\infty(\hat x, k, d, \omega) \,;\, \omega \in \Omega,\, \hat x \in \mathbb{S}^2, k > 0,\, d \in \mathbb{S}^2 \}. \end{equation} The well-posedness of the direct scattering problem paves the way for our further study of the inverse problem ({\bf IP}).} \sq{In {\bf IP}, we are concerned with the recoveries of the three unknowns $\sigma^2$, $V$, $f$ in a \emph{sequential} way, by knowledge of the associated far-field pattern measurements $u^\infty(\hat x, k, d, \omega)$. By sequential, we mean that $\sigma^2$, $V$, $f$ are recovered from the corresponding data sets one by one. In addition to this, in the recovery procedure, both the \emph{passive} and \emph{active} measurements are utilized. When $\alpha = 0$, the incident wave is suppressed and the scattering is solely generated by the unknown source. The corresponding far-field pattern is thus referred to as the passive measurement. In this case, the far-field pattern is independent of the incident direction $d$, and we denote it as $u^\infty(\hat x, k, \omega)$. When $\alpha = 1$, the scattering is generated by both the active source and the incident wave, and the far-field pattern is referred to as the active measurement, denoted as $u^\infty(\hat x, k, d, \omega)$.
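To give a flavor of what the far-field pattern encodes, it is instructive to record the simplest passive case, namely $\alpha = 0$ with $V \equiv 0$ and $\sigma \equiv 0$; the following computation is standard (cf.~\cite{colton2012inverse}) and serves only as an illustration. In this case $u = u^{sc} = \int_D \Phi_k(x,y) f(y) \dif{y}$ with $\Phi_k(x,y) := e^{\mathrm{i}k|x-y|}/(4\pi|x-y|)$, and the expansion $|x-y| = |x| - \hat x \cdot y + \mathcal{O}(|x|^{-1})$ in \eqref{eq:farfield} yields
\[
u^\infty(\hat x, k) = \frac{1}{4\pi} \int_D e^{-\mathrm{i} k \hat x \cdot y} f(y) \dif{y};
\]
that is, up to a constant factor, the passive far-field pattern at wave number $k$ is the Fourier transform of the source restricted to the sphere $\{\xi \in \R^3 \,;\, |\xi| = k\}$. Letting $k$ range over $\R_+$ and $\hat x$ over $\mathbb{S}^2$ thus gives access to the full Fourier transform of $f$, which is the basic mechanism exploited by the recovery results below.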
Under these settings, we formulate our {\bf IP} as} \sq{\begin{equation}\label{eq:ip-SchroEqu2018} \textbf{IP \ :}\quad \left\{ \begin{aligned} \mathcal M_1(\omega) := & \ \{u^\infty(\hat x, k, \omega) \,;\, \forall \hat x \in \mathbb{S}^2,\, \forall k \in \R_+ \} && \rightarrow \quad \sigma^2, \\ \mathcal M_2(\omega) := & \ \{u^\infty(\hat x, k, d, \omega) \,;\, \forall \hat x \in \mathbb{S}^2,\, \forall k \in \R_+,\, \forall d \in \mathbb{S}^2 \} && \rightarrow \quad V, \\ \mathcal M_3 := & \ \{u^\infty(\hat x, k, d, \omega) \,;\, \forall \hat x \in \mathbb{S}^2,\, \forall k \in \R_+,\, d\ \text{fixed},\, \forall \omega \in \Omega\, \} && \rightarrow \quad f. \end{aligned} \right. \end{equation} The data set $\mathcal M_1(\omega)$ (abbr.~$\mathcal M_1$) corresponds to the passive measurement ($\alpha = 0$), while the data sets $\mathcal M_2(\omega)$ (abbr.~$\mathcal M_2$) and $\mathcal M_3$ correspond to the active measurements ($\alpha = 1$). Different random samples $\omega$ give different data sets $\mathcal M_1$ and $\mathcal M_2$. All of the $\sigma^2$, $V$, $f$ in the {\bf IP} are assumed to be unknown, and our study shows that the data sets $\mathcal M_1$, $\mathcal M_2$, $\mathcal M_3$ can recover $\sigma^2$, $V$, $f$, respectively. The mathematical arguments of our study are constructive and we derive explicit recovery formulas, which can be employed for numerical reconstruction in future work.} In the aforementioned {\bf IP}, we are particularly interested in the case with a single realization, namely, the sample $\omega$ is fixed in the recovery of $\sigma^2$ and $V$ in \eqref{eq:ip-SchroEqu2018}. \sq{Intuitively, a particular realization of $\dot B_x$ provides little information about the statistical properties of the random source. However, our study indicates that a \emph{single realization} of the far-field measurement can be used to uniquely recover the variance function and the potential in certain scenarios. A crucial assumption to make the single-realization recovery possible is that the randomness is independent of the wave number $k$. Indeed, there are assorted applications in which the randomness changes slowly or is independent of time \cite{caro2016inverse, Lassas2008}, and by Fourier transforming into the frequency domain, they actually correspond to the aforementioned situation. The single-realization recovery has been studied in the literature; see, e.g., \cite{caro2016inverse,Lassas2008,LassasA}. The idea of this work is mainly motivated by \cite{caro2016inverse}.} There is an abundant literature on the inverse scattering problem associated with either passive or active measurements. Given a known potential, the recovery of an unknown source term by the corresponding passive measurement is referred to as the inverse source problem. We refer to \cite{bao2010multi,Bsource,BL2018,ClaKli,GS1,Isakov1990,IsaLu,Klibanov2013,KS1,WangGuo17,Zhang2015} and the references therein for both theoretical uniqueness/stability results and computational methods for the inverse source problem in the deterministic setting, namely $\sigma\equiv 0$. \sq{The authors are also aware of some studies on the inverse source problem concerning the recovery of a random source \cite{LiLiinverse2018,LiHelinLiinverse2018}. In \cite{LiHelinLiinverse2018}, the homogeneous Helmholtz system with a random source is studied.
Compared with \cite{LiHelinLiinverse2018}, our system \eqref{eq:1} comprises both an unknown source and an unknown potential, which makes the corresponding study radically more challenging.} The determination of a random source by the corresponding passive measurement was also recently studied in \cite{bao2016inverse,Lu1,Yuan1}, and the determination of a random potential by the corresponding active measurement was established in \cite{caro2016inverse}. We also refer to \cite{LassasA} and the references therein for more relevant studies on the determination of a random potential. The simultaneous recovery of an unknown source and its surrounding potential was also investigated in the literature. In \cite{KM1,liu2015determining}, motivated by applications in thermo- and photo-acoustic tomography, the simultaneous recovery of an unknown source and its surrounding medium parameter was considered. The simultaneous recovery study in \cite{KM1,liu2015determining} was confined to the deterministic setting and associated mainly with the passive measurement. In this paper, we consider the recovery of an unknown random source and an unknown potential term associated with the Schr\"odinger system \eqref{eq:1}. The major novelty of our unique recovery results compared to those existing ones in the literature is that on the one hand, both the random source and the potential are unknown, and on the other hand, we use both passive and active measurements for the unique recovery. We establish three unique recovery results. \begin{thm} \label{thm:Unisigma-SchroEqu2018} \sq{Without knowing $V$ and $f$ in system \eqref{eq:1}, the data set $\mathcal M_1$ can recover $\sigma^2$ almost surely.} \end{thm} \begin{rem} Theorem~\ref{thm:Unisigma-SchroEqu2018} implies that the variance function can be uniquely recovered without \emph{a priori} knowledge of $f$ or $V$. \sq{Moreover, since the passive measurement $\mathcal M_1$ is used, Theorem \ref{thm:Unisigma-SchroEqu2018} indicates that the variance function can be uniquely recovered by a single realization of the passive scattering measurement. In addition, for the sake of simplicity, we set the wave number $k$ in the definition of $\mathcal M_1$ to be running over all positive real numbers. But in practice, it is enough to let $k$ be greater than any fixed positive number. This remark equally applies to Theorem \ref{thm:UniPot1-SchroEqu2018}.} \end{rem} \begin{thm} \label{thm:UniPot1-SchroEqu2018} \sq{Without knowing $\sigma$ and $f$ in system \eqref{eq:1}, the data set $\mathcal M_2$ uniquely recovers the potential $V$.} \end{thm} \begin{rem} Theorem \ref{thm:UniPot1-SchroEqu2018} shows that the potential $V$ can be uniquely recovered without knowing the random source, namely $\sigma$ and $f$. Moreover, we only make use of a single realization of the active scattering measurement. \end{rem} \begin{thm} \label{thm:UniSou1-SchroEqu2018}
\sq{In system \eqref{eq:1}, suppose that $\sigma$ is unknown and the potential $V$ is known in advance. Then there exists a positive constant $C$ that depends only on $D$ such that if $\nrm[L^\infty(\R^3)]{V} < C$, the data set $\mathcal M_3$ can uniquely recover the expectation $f$.} \end{thm} \jz{The rest of the paper is outlined as follows. In Section \ref{sec:MADP-SchroEqu2018}, we present the mathematical analysis of the forward scattering problem given in \eqref{eq:1}. Section \ref{sec:AsyEst-SchroEqu2018} establishes some asymptotic estimates, which are of key importance in the recovery of the variance function. In Section \ref{sec:RecVar-SchroEqu2018}, we prove the first recovery result of the variance function with a single realization of the passive scattering measurement. Section \ref{sec:RecPS-SchroEqu2018} is devoted to the second and third recovery results of the potential and the random source. We conclude the work with some discussions in \mbox{Section \ref{sec:Conclusions-SchroEqu2018}.}} \sq{ \section{Mathematical analysis of the direct problem} \label{sec:MADP-SchroEqu2018}} \sq{In this section, the uniqueness and existence of a {\it mild solution} are established for the system \eqref{eq:1}. Before analyzing the direct problem, we first make some preparations. In Section \ref{subsec:NotandAss-SchroEqu2018}, we introduce some preliminaries which are used throughout the rest of the paper. Some technical lemmas that are necessary for the analysis of both the direct and inverse problems are presented in Section \ref{subsec:STLemmas-SchroEqu2018}.
In Section \ref{subset:WellDefined-SchroEqu2018}, we give the well-posedness of the direct problem.} \subsection{Preliminaries} \label{subsec:NotandAss-SchroEqu2018} \sq{Let us first introduce the generalized Gaussian white noise $\dot B_x(\omega)$ \cite{kusuoka1982support}.} To give a brief introduction, we write $\dot B_x(\omega)$ temporarily as $\dot B(x,\omega)$. It is known that $\dot B(\cdot,\omega) \in H_{loc}^{-3/2-\epsilon}(\R^3)$ almost surely for any $\epsilon\in\mathbb{R}_+$ \cite{kusuoka1982support}. Then $\dot B \colon \omega \in \Omega \mapsto \dot B(\cdot,\omega) \in \mathscr{D}'(D)$ defines a map from the probability space to the space of generalized functions. Here, $\mathscr{D}(D)$ signifies the space consisting of smooth functions that are compactly supported in $D$, and $\mathscr{D}'(D)$ signifies its dual space. For any $\varphi \in \mathscr{D}(D)$, $\dot B \colon \omega \in \Omega \mapsto \agl[\dot B(x,\omega), \varphi(x)] \in \R$ is assumed to be a Gaussian random variable with zero mean and variance $\int_{D} |\varphi(x)|^2 \dif{x}$. We also recall that a function $\psi$ in $L_{loc}^1(\R^n)$ defines a distribution through ${\agl[\psi,\varphi] = \int_{\R^n} \psi(x) \varphi(x) \dif{x}}$ \cite{caro2016inverse}. Then $\dot B(x,\omega)$ satisfies: \[ \agl[\dot B(\cdot,\omega), \varphi(\cdot)] \sim \mathcal{N}(0,\nrm[L^2(D)]{\varphi}^2), \quad \forall \varphi \in \mathscr{D}(D). \] Moreover, the covariance of $\dot B(x,\omega)$ is assumed to satisfy the following property: for every $\varphi$, $\psi$ in $\mathscr{D}(D)$, the covariance between $\agl[\dot B(\cdot,\omega), \varphi]$ and $\agl[\dot B(\cdot,\omega), \psi]$ is defined as $\int_{D} \varphi(x) \psi(x) \dif{x}$: \begin{equation} \label{eq:ItoIso-SchroEqu2018} \mathbb{E} \big( \agl[\dot B(\cdot,\omega), \varphi] \agl[\dot B(\cdot,\omega), \psi] \big) := \int_{D} \varphi(x) \psi(x) \dif{x}. \end{equation} These definitions can be generalized to the case where $\varphi, \psi \in L^2(D)$ by density arguments.
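To make the white-noise model more concrete, the following standard finite-dimensional heuristic may be kept in mind; it is only an illustration and plays no role in the subsequent analysis. Partition $D$ into cubic cells $Q_j$ of volume $h^3$ with centers $x_j$, and consider $\dot B_h(x,\omega) := \sum_j \xi_j(\omega) h^{-3/2} \chi_{Q_j}(x)$, where the $\xi_j$ are i.i.d.~standard Gaussian random variables. Then for any $\varphi \in \mathscr{D}(D)$,
\[
\agl[\dot B_h(\cdot,\omega), \varphi] = \sum_j \xi_j(\omega)\, h^{-3/2} \int_{Q_j} \varphi(x) \dif{x} \approx \sum_j \xi_j(\omega)\, h^{3/2} \varphi(x_j),
\]
which is a zero-mean Gaussian random variable with variance $\sum_j h^3 |\varphi(x_j)|^2 \rightarrow \nrm[L^2(D)]{\varphi}^2$ as $h \rightarrow 0$, in accordance with \eqref{eq:ItoIso-SchroEqu2018} for $\psi = \varphi$.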
The product $\delta(x) \dot B(x,\omega)$ is defined as \begin{equation} \label{eq:sigmaB-SchroEqu2018} \delta(x) \dot B(x,\omega) \colon \varphi \in L^2(D) \mapsto \agl[\dot B(\cdot,\omega), \delta(\cdot)\varphi(\cdot)] \in \R. \end{equation} Second, we set $$\Phi(x,y) = \Phi_k(x,y) := \frac {e^{ik|x-y|}}{4\pi|x-y|}, \quad x\in\mathbb{R}^3\backslash\{y\}.$$ Here $\Phi_k$ is the outgoing fundamental solution, centered at $y$, of the differential operator $-\Delta-k^2$. Define the resolvent operator ${\mathcal{R}_{k}}$ by \begin{equation} \label{eq:DefnRk-SchroEqu2018} {\mathcal{R}_{k}}(\varphi)(x) = ({\mathcal{R}_{k}} \varphi)(x) := \int_{\mathop{\rm supp} \varphi} \Phi_k(x,y) \varphi(y) \dif{y}, \quad x \in \R^3, \end{equation} where $\varphi$ can be any measurable function on $\mathbb{R}^3$ as long as \eqref{eq:DefnRk-SchroEqu2018} is well-defined for almost every $x$ in $\R^3$. In analogy with \eqref{eq:DefnRk-SchroEqu2018}, we define ${\mathcal{R}_{k}}(\delta \dot{B}_x)(\omega)$ as \begin{equation} \label{eq:RkSigmaBDefn-SchroEqu2018} {\mathcal{R}_{k}}(\delta \dot{B}_x)(\omega) := \agl[\dot B(\cdot,\omega), \delta(\cdot) \Phi(x,\cdot)], \end{equation} for any $\delta \in L^{\infty}(\R^3)$ with $\mathop{\rm supp} \delta \subseteq D$. We write ${\mathcal{R}_{k}}(\delta \dot{B}_x)(\omega)$ as ${\mathcal{R}_{k}}(\delta \dot{B}_x)$ for short. We may also write ${\mathcal{R}_{k}}(\delta \dot{B}_x)$ as $\int_{\R^3} \Phi_k(x,y) \delta(y) \dot B_y \dif{y}$ or $\int_{\R^3} \Phi_k(x,y) \delta(y) \dif{B_y}$. We may omit the subscript $x$ in ${\mathcal{R}_{k}}(\delta \dot B_x)$ if it is clear from the context. Write $\agl[x] := (1+|x|^2)^{1/2}$ for $x \in \R^3$. We introduce the following weighted $L^2$-norm and the corresponding function space over $\R^3$ for any $s \in \R$, \begin{equation} \label{eq:WetdSpace-SchroEqu2018} \left\{ \begin{aligned} \nrm[L_{s}^2(\R^3)]{f} & := \nrm[L^2(\R^3)]{\agl[\cdot]^{s} f(\cdot)} = \Big( \int_{\R^3} \agl[x]^{2s} |f|^2 \dif{x} \Big)^{\frac 1 2}, \\ L_{s}^2(\R^3) & := \left\{ f\in L_{loc}^1(\mathbb{R}^3); \nrm[L_{s}^2(\R^3)]{f} < +\infty \right\}. \end{aligned}\right. \end{equation} We also define $L_{s}^2(S)$ for any measurable subset $S$ in $\R^3$ by replacing $\R^3$ in \eqref{eq:WetdSpace-SchroEqu2018} with $S$. In what follows, we may denote $L_s^2(\R^3)$ as $L_s^2$ for short if there is no ambiguity. \jz{In the sequel, we write $\mathcal{L}(\mathcal A, \mathcal B)$ to denote the set of all bounded linear mappings from a normed vector space $\mathcal A$ to a normed vector space $\mathcal B$. For any mapping $\mathcal K \in \mathcal{L}(\mathcal A, \mathcal B)$, we denote its operator norm as $\nrm[\mathcal{L}(\mathcal A, \mathcal B)]{\mathcal K}$. We write the identity operator as $I$. We also use the notation $C$ and its variants, such as $C_D$ and $C_{D,f}$, to represent generic constants whose particular values may change from line to line.
We use $\mathcal{A}\lesssim \mathcal{B}$ to signify $\mathcal{A}\leq C \mathcal{B}$ and $\mathcal{A} \simeq \mathcal{B}$ to signify $\mathcal{A} = C \mathcal{B}$, for some generic positive constant $C$. We denote ``almost everywhere'' as~``a.e.''~and ``almost surely'' as~``a.s.''~for short. We use $|\mathcal S|$ to denote the Lebesgue measure of any Lebesgue-measurable set $\mathcal S$. Define $M(x) = \sup_{y \in D}|x-y|$, and $\text{diam}\,D := \sup_{x,y \in D} |x-y|$, where $D$ is the bounded domain containing $\mathop{\rm supp} \sigma$, $\mathop{\rm supp} V$, $\mathop{\rm supp} f$ and the origin. Thus we have $M(0) \leq \text{diam}\,D < \infty$. It can be verified that \begin{equation} \label{eq:Contain-SchroEqu2018} \{ y-x \in \R^3 ; |x| \leq 2 M(0), y \in D \} \subseteq \{ z \in \R^3 ; |z| \leq 3\,\text{diam}\,D \}. \end{equation}} This is because $|y-x| \leq |y| + |x| \leq \text{diam}\,D + 2M(0) \leq 3\text{diam}\,D$. \subsection{Several technical lemmas} \label{subsec:STLemmas-SchroEqu2018} Several important technical lemmas are presented here. \begin{lem} \label{lemma:RkBoundedR3-SchroEqu2018} For any $\varphi \in L^\infty(\R^3)$ with $\mathop{\rm supp} \varphi \subseteq D$ and any $\epsilon \in \mathbb{R}_+$, we have \[ {\mathcal{R}_{k}} \varphi \in L_{-1/2-\epsilon}^2. \] \end{lem}
\begin{proof}[Proof of Lemma \ref{lemma:RkBoundedR3-SchroEqu2018}] Assume that $\varphi$ belongs to $L^\infty(\R^3)$ with its support contained in $D$. \jz{Obviously we have that $\nrm[L^2(D)]{\varphi} < +\infty$. Using the Cauchy-Schwarz inequality we have \begin{align} \label{eq:Rk1-SchroEqu2018} \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}} \varphi}^2 & \lesssim \int_{\R^3} \agl[x]^{-1-2\epsilon} \big( \int_D \frac{1}{|x-y|^2} \dif{y} \big) \cdot \big( \int_D |\varphi(y)|^2 \dif{y} \big) \dif{x} \nonumber\\ & \lesssim \nrm[L^2(D)]{\varphi}^2 \Big[ \int_{|x| \leq 2M(0)} \big( \int_D \frac{1}{|x-y|^2} \dif{y} \big) \dif{x} \nonumber\\ & \quad\quad + \int_{|x| > 2M(0)} \agl[x]^{-1-2\epsilon} \agl[x]^{-2} \dif{x} \Big]. \end{align} By the change of variable, the first term in the square brackets in \eqref{eq:Rk1-SchroEqu2018} satisfies \begin{equation} \label{eq:Rk2-SchroEqu2018} \int_{|x| \leq 2M(0)} \big( \int_D \frac{1}{|x-y|^2} \dif{y} \big) \dif{x} = \int_{|x| \leq 2M(0)} \big( \int_{z \in \{ y-x \,;\, y \in D \}} \frac{1}{|z|^2} \dif{z} \big) \dif{x}. \end{equation} From \eqref{eq:Contain-SchroEqu2018}, we can continue \eqref{eq:Rk2-SchroEqu2018} as \begin{align} \int_{|x| \leq 2M(0)} \big( \int_D \frac{1}{|x-y|^2} \dif{y} \big) \dif{x} & \leq \int_{|x| \leq 2M(0)} \big( \int_{\{z \,;\, |z| \leq 3\,\text{diam}\,D \}} \frac{1}{|z|^2} \dif{z} \big) \dif{x} \nonumber\\ & = \int_{|x| \leq 2M(0)} \big( 12\pi\,\text{diam}\,D \big) \dif{x} < +\infty. \label{eq:Rk3-SchroEqu2018} \end{align} Meanwhile, the second term in the square brackets in \eqref{eq:Rk1-SchroEqu2018} satisfies \begin{equation} \label{eq:Rk4-SchroEqu2018} \int_{|x| > 2M(0)} \agl[x]^{-1-2\epsilon} \agl[x]^{-2} \dif{x} \leq \int_{\R^3} \agl[x]^{-3-2\epsilon} \dif{x} < +\infty. \end{equation} Note that \eqref{eq:Rk4-SchroEqu2018} holds for every $\epsilon \in \mathbb{R}_+$. Combining \eqref{eq:Rk1-SchroEqu2018}, \eqref{eq:Rk3-SchroEqu2018} and \eqref{eq:Rk4-SchroEqu2018}, we conclude \[ \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}} \varphi}^2 < +\infty. \] } The proof is complete. \end{proof}
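\begin{rem}
As a purely illustrative aside, the action of the resolvent ${\mathcal{R}_{k}}$ in \eqref{eq:DefnRk-SchroEqu2018} can be approximated by a simple quadrature. The following minimal Python sketch is our own illustration and not part of the analysis; the domain $D = [-0.5,0.5]^3$, the wavenumber $k$ and the density $\varphi$ are hypothetical choices. It evaluates $({\mathcal{R}_{k}}\varphi)(x)$ at a point outside $D$, where the kernel $\Phi_k$ is never singular.
\begin{verbatim}
import numpy as np

k, h = 10.0, 0.05                            # wavenumber and grid spacing
pts = np.arange(-0.5 + h / 2, 0.5, h)        # cell midpoints covering D
Y = np.stack(np.meshgrid(pts, pts, pts, indexing="ij"),
             axis=-1).reshape(-1, 3)

def phi(y):
    # a smooth bump supported inside D (hypothetical test density)
    r2 = np.sum(y ** 2, axis=-1)
    out = np.zeros_like(r2)
    inside = r2 < 0.25
    out[inside] = np.exp(-1.0 / (0.25 - r2[inside]))
    return out

def resolvent(x):
    # (R_k phi)(x) = int_D e^{ik|x-y|} / (4 pi |x-y|) phi(y) dy
    r = np.linalg.norm(x - Y, axis=-1)
    kernel = np.exp(1j * k * r) / (4 * np.pi * r)
    return np.sum(kernel * phi(Y)) * h ** 3

print(resolvent(np.array([2.0, 0.0, 0.0])))  # field at an exterior point
\end{verbatim}
For evaluation points inside $D$, the weak singularity of $\Phi_k$ would require a corrected quadrature weight on the cell containing $x$.
\end{rem}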
\jz{Now we present a special version of Agmon's estimates for the convenience of our reader (cf. \cite{eskin2011lectures}). This special version will be used when proving Lemma \ref{lemma:RkVBoundedR3-SchroEqu2018}.} \begin{lem}[Agmon's estimates \cite{eskin2011lectures}] \label{lemma:AgmonEst-SchroEqu2018} For any $\epsilon > 0$, \jz{there exists some $k_0 \geq 2$ such that for any $k > k_0$} we have \begin{equation} \label{eq:AgmonEst-SchroEqu2018} \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}} \varphi} \leq C_\epsilon k^{-1} \nrm[L_{1/2+\epsilon}^2]{\varphi}, \quad \forall \varphi \in L_{1/2+\epsilon}^2 \end{equation} where $C_\epsilon$ is independent of $k$ {and $\varphi$}. \end{lem} \sq{The proof of Lemma \ref{lemma:AgmonEst-SchroEqu2018} can be found in \cite{eskin2011lectures}. The symbol $k_0$ appearing in Lemma \ref{lemma:AgmonEst-SchroEqu2018} is reserved for later use.} \begin{lem} \label{lemma:RkVBoundedR3-SchroEqu2018} For any fixed $\epsilon \geq 0$, when $k > k_0$, we have $$\nrm[\mathcal{L}(L_{-1/2-\epsilon}^2, L_{-1/2-\epsilon}^2)]{{\mathcal{R}_{k}} \circ V} \leq C_{\epsilon,D,V} k^{-1},$$ where the constant $C_{\epsilon,D,V}$ depends on $\epsilon, D$ and $V$ but is independent of $k$. \end{lem} \begin{proof}[Proof of Lemma \ref{lemma:RkVBoundedR3-SchroEqu2018}] By Lemma \ref{lemma:AgmonEst-SchroEqu2018}, when $k > k_0$, we have the following estimate, $$\nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}} V u} = \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}} (Vu)} \leq C_\epsilon k^{-1} \nrm[L_{1/2+\epsilon}^2]{Vu}.$$ Due to the boundedness of $\mathop{\rm supp} V$, there holds $\nrm[L_{1/2+\epsilon}^2]{Vu} \leq C_{D,V} \nrm[L_{-1/2-\epsilon}^2]{u}$ for some constant $C_{D,V}$ depending on $D$ and $V$ but independent of $u$ and $\epsilon$. Thus, we have $$\nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}} Vu} \leq C_{\epsilon,D,V} k^{-1} \nrm[L_{-1/2-\epsilon}^2]{u}.$$ The proof is complete. \end{proof} \sq{In the rest of the paper, we use $k^*$ to represent the maximum of the quantity $k_0$ originating from Lemma \ref{lemma:AgmonEst-SchroEqu2018} and the quantity \[ \sup_{k \in \R_+} \{ k \,;\, \nrm[\mathcal{L}{(L_{-1/2-\epsilon}^2, L_{-1/2-\epsilon}^2)}]{\mathcal{R}_k V} \geq 1 \} \ + \ 1. \] This choice of $k^*$ guarantees that if $k \geq k^*$, both the inequality \eqref{eq:AgmonEst-SchroEqu2018} and the Neumann expansion \( {(I - \mathcal{R}_k V)^{-1} \ = \ \sum_{j \geq 0} (\mathcal{R}_k V)^j} \) in $L_{-1/2-\epsilon}^2$ hold. For the subsequent analysis we also need a local version of Lemma \ref{lemma:RkVBoundedR3-SchroEqu2018}.} \begin{lem} \label{lemma:RkVBounded-SchroEqu2018} When $k > k_0$, we have \begin{equation} \label{eq:RkVbdd-SchroEqu2018} \nrm[\mathcal{L}(L^2(D), L^2(D))]{{\mathcal{R}_{k}} V} \leq C_{D,V} k^{-1}, \end{equation} for some constant $C_{D,V}$ depending on $D$ and $V$ but independent of $k$. Moreover, for every $\varphi \in L^2(\R^3)$ with $\mathop{\rm supp} \varphi \subseteq D$, there holds \begin{equation} \label{eq:VRkbdd-SchroEqu2018} \nrm[L^2(D)]{V{\mathcal{R}_{k}} \varphi} \leq C_{D,V} k^{-1} \nrm[L^2(D)]{\varphi}, \end{equation} for some constant $C_{D,V}$ depending on $D$ and $V$ but independent of $\varphi$ and $k$. \end{lem} \begin{proof} \jz{For any $\varphi \in L^2(D)$, thanks to the boundedness of $D$ we have \begin{equation} \label{eq:RkVbddInter1-SchroEqu2018} \nrm[L^2(D)]{{\mathcal{R}_{k}} V\varphi} \leq C_D \nrm[L_{-1}^2]{{\mathcal{R}_{k}} (V\varphi)}.
\end{equation} By Lemma \ref{lemma:AgmonEst-SchroEqu2018} (letting the $\epsilon$ in Lemma \ref{lemma:AgmonEst-SchroEqu2018} be $\frac 1 2$), we conclude that \begin{equation} \label{eq:RkVbddInter2-SchroEqu2018} \nrm[L_{-1}^2]{{\mathcal{R}_{k}} (V\varphi)} \leq C k^{-1} \nrm[L_1^2]{V\varphi}. \end{equation} By virtue of the boundedness of $V$, we have \begin{equation} \label{eq:RkVbddInter3-SchroEqu2018} \nrm[L_1^2]{V\varphi} \leq C_{D,V}\nrm[L^2(D)]{\varphi}. \end{equation} Combining \eqref{eq:RkVbddInter1-SchroEqu2018}-\eqref{eq:RkVbddInter3-SchroEqu2018}, we arrive at \eqref{eq:RkVbdd-SchroEqu2018}. To prove \eqref{eq:VRkbdd-SchroEqu2018}, by Lemma \ref{lemma:AgmonEst-SchroEqu2018}, we have $$\nrm[L^2(D)]{{\mathcal{R}_{k}} \varphi} \leq C_D \nrm[L_{-1}^2]{{\mathcal{R}_{k}} \varphi} \leq C_D k^{-1} \nrm[L_1^2]{\varphi} \leq C_D k^{-1} \nrm[L^2(D)]{\varphi}.$$ Therefore, $$\nrm[L^2(D)]{V{\mathcal{R}_{k}} \varphi} \leq \nrm[L^\infty(D)]{V} \cdot \nrm[L^2(D)]{{\mathcal{R}_{k}} \varphi} \leq C_{D,V} k^{-1} \nrm[L^2(D)]{\varphi}.$$ The proof is complete.} \end{proof} Lemma \ref{lemma:RkSigmaB-SchroEqu2018} shows some basic properties of ${\mathcal{R}_{k}}(\sigma \dot{B}_x)$ defined in \eqref{eq:RkSigmaBDefn-SchroEqu2018}. \begin{lem} \label{lemma:RkSigmaB-SchroEqu2018} We have $${\mathcal{R}_{k}}(\sigma \dot B_x) \in L_{-1/2-\epsilon}^2 \quad \textrm{~a.s.~}.$$ Moreover, we have \begin{equation*} \mathbb{E} \nrm[L^2(D)]{{\mathcal{R}_{k}}(\sigma \dot B_x)} < C < +\infty \end{equation*} for some constant $C$ independent of $k$. \end{lem} \begin{proof} From \eqref{eq:RkSigmaBDefn-SchroEqu2018}, \eqref{eq:sigmaB-SchroEqu2018} and \eqref{eq:ItoIso-SchroEqu2018}, one can compute, \begin{align*} \mathbb{E} ( \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot{B}_x)}^2 ) & = \int_{\R^3} \agl[x]^{-1-2\epsilon} \mathbb{E} \big( \agl[\dot B(\cdot,\omega), \sigma(\cdot) \Phi(x,\cdot)] \agl[\dot B(\cdot,\omega), \sigma(\cdot) \overline{\Phi}(x,\cdot)] \big) \dif{x} \\ & = \int_{\R^3} \agl[x]^{-1-2\epsilon} \int_D \sigma^2(y) \frac 1 {16\pi^2 |x-y|^2} \dif{y} \dif{x} \\ & \leq C \nrm[L^\infty(D)]{\sigma}^2 \int_{\R^3} \agl[x]^{-1-2\epsilon} \int_D |x-y|^{-2} \dif{y} \dif{x}. \end{align*} By arguments similar to the ones used in the proof of Lemma \ref{lemma:RkBoundedR3-SchroEqu2018} we arrive at \begin{equation} \label{eq:Rksigma2Bounded-SchroEqu2018} \mathbb{E} (\nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot{B}_x)}^2) \leq C_D < +\infty, \end{equation} for some constant $C_D$ depending on $D$ but not on $k$. By the H\"older inequality applied to the probability measure, (\ref{eq:Rksigma2Bounded-SchroEqu2018}) gives \begin{equation} \label{eq:RksigmaBoundedCD-SchroEqu2018} \mathbb{E} (\nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot{B}_x)}) \leq [ \mathbb{E} ( \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot{B}_x)}^2 ) ]^{1/2} \leq \sq{C_D^{1/2}} < +\infty, \end{equation} for some constant $C_D$ independent of $k$. The inequality (\ref{eq:RksigmaBoundedCD-SchroEqu2018}) gives $${\mathcal{R}_{k}}(\sigma \dot B_x) \in L_{-1/2-\epsilon}^2 \quad \textrm{~a.s.~}.$$ By replacing $\R^3$ with $D$ and deleting all the terms $\agl[x]^{-1-2\epsilon}$ in the derivations above, one arrives at $\mathbb{E} \nrm[L^2(D)]{{\mathcal{R}_{k}}(\sigma \dot{B}_x)} < +\infty$. The proof is done. \end{proof}
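\begin{rem}
The covariance identity \eqref{eq:ItoIso-SchroEqu2018}, which drives the computation in the proof of Lemma \ref{lemma:RkSigmaB-SchroEqu2018}, can be checked numerically. In the following minimal Monte Carlo sketch (our own discretization, offered only as an illustration), $\agl[\dot B, \varphi]$ is approximated on a grid with cell volume $h^3$ by $\sum_i \varphi(y_i)\, \xi_i\, h^{3/2}$ with i.i.d.\ $\xi_i \sim \mathcal{N}(0,1)$; the test function $\varphi$ is a hypothetical choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
h = 0.1
pts = np.arange(-0.5 + h / 2, 0.5, h)        # D = [-0.5, 0.5]^3
Y = np.stack(np.meshgrid(pts, pts, pts, indexing="ij"),
             axis=-1).reshape(-1, 3)

phi = np.exp(-10 * np.sum(Y ** 2, axis=-1))  # a test function on D

# <B_dot, phi> ~ N(0, ||phi||_{L^2(D)}^2): each cell carries an
# independent Gaussian increment of standard deviation h^{3/2}
samples = [np.sum(phi * rng.standard_normal(Y.shape[0])) * h ** 1.5
           for _ in range(2000)]

print(np.var(samples))                       # Monte Carlo variance
print(np.sum(phi ** 2) * h ** 3)             # ||phi||_{L^2(D)}^2 by quadrature
\end{verbatim}
The two printed numbers should agree up to the Monte Carlo error, which is the discrete counterpart of \eqref{eq:ItoIso-SchroEqu2018} with $\varphi = \psi$.
\end{rem}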
\subsection{The well-posedness of the \textbf{DP}} \label{subset:WellDefined-SchroEqu2018} For a particular realization of the random sample $\omega \in \Omega$, the term $\dot B_x(\omega)$, \jz{treated as a function of the spatial argument $x$, could be very rough. The roughness of this term could render the classical theory of second-order elliptic PDEs inapplicable to \eqref{eq:1}.} For this reason, the notion of the {\em mild solution} is introduced for random PDEs (cf. \cite{bao2016inverse}). In what follows, we adopt the mild solution in our problem setting, and we show that this mild solution and the corresponding far-field pattern are well-posed in a proper sense. Reformulating \eqref{eq:1} into the Lippmann-Schwinger equation formally (cf. \cite{colton2012inverse}), we have \begin{equation} \label{eq:LippSch-SchroEqu2018} (I - {\mathcal{R}_{k}} V) u = \alpha \cdot u^i - {\mathcal{R}_{k}} f - {\mathcal{R}_{k}}(\sigma \dot B_x), \end{equation} where the term ${\mathcal{R}_{k}}(\sigma \dot B_x)$ is defined by (\ref{eq:RkSigmaBDefn-SchroEqu2018}). Recall that $u^{sc} = u - \alpha \cdot u^i$.
From (\ref{eq:LippSch-SchroEqu2018}) we have \begin{equation} \label{eq:uscDefn-SchroEqu2018} (I - {\mathcal{R}_{k}} V) u^{sc} = \alpha {\mathcal{R}_{k}} V u^i - {\mathcal{R}_{k}} f - {\mathcal{R}_{k}}(\sigma \dot B_x). \end{equation} \begin{thm} \label{thm:MildSolUnique-SchroEqu2018} \sq{When $k > k^*$, there exists a unique stochastic process $u^{sc}(\cdot,\omega) \colon \R^3 \to \mathbb C$ such that $u^{sc}(x)$ satisfies \eqref{eq:uscDefn-SchroEqu2018} a.s.\,, and ${u^{sc}(\cdot,\omega) \in L_{-1/2-\epsilon}^2 \textrm{~a.s.~}}$ for any $\epsilon\in\mathbb{R}_+$. Moreover, we have \begin{equation} \label{thm:SolPosed-SchroEqu2018} \nrm[L_{-1/2-\epsilon}^2]{u^{sc}(\cdot,\omega)} \lesssim \nrm[L_{1/2+\epsilon}^2]{\alpha V u^i} + \nrm[L_{1/2+\epsilon}^2]{f} + \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot B_x)}. \end{equation} Then we call $u(x) := u^{sc} + \alpha \cdot u^i(x)$ the {\em mild solution} to the random scattering system \eqref{eq:1}.} \end{thm} \begin{proof} \sq{By Lemmas \ref{lemma:RkBoundedR3-SchroEqu2018}, \ref{lemma:RkVBoundedR3-SchroEqu2018} and \ref{lemma:RkSigmaB-SchroEqu2018}, we see $$F := \alpha {\mathcal{R}_{k}} V u^i - {\mathcal{R}_{k}} f - {\mathcal{R}_{k}}(\sigma \dot B_x) \in L_{-1/2-\epsilon}^2.$$ Note that $k > k^*$, so the term $\sum_{j=0}^\infty ({\mathcal{R}_{k}} V)^j$ is well-defined, and thus the term $\sum_{j=0}^\infty ({\mathcal{R}_{k}} V)^j F$ belongs to $L_{-1/2-\epsilon}^2$. Because $\sum_{j=0}^\infty ({\mathcal{R}_{k}} V)^j = (I - {\mathcal{R}_{k}} V)^{-1}$, we see $(I - {\mathcal{R}_{k}} V)^{-1} F \in L_{-1/2-\epsilon}^2$. Let $u^{sc} := (I - {\mathcal{R}_{k}} V)^{-1} F \in L_{-1/2-\epsilon}^2$; then $u^{sc}$ is the unique solution of \eqref{eq:uscDefn-SchroEqu2018}. That is, the existence of the mild solution is proved. The uniqueness of the mild solution follows from the invertibility of the operator $(I - {\mathcal{R}_{k}} V)^{-1}$. From \eqref{eq:uscDefn-SchroEqu2018} and Lemmas \ref{lemma:AgmonEst-SchroEqu2018}-\ref{lemma:RkVBoundedR3-SchroEqu2018}, we have \begin{align*} \nrm[L_{-1/2-\epsilon}^2]{u^{sc}(\cdot,\omega)} & = \nrm[L_{-1/2-\epsilon}^2]{(I - {\mathcal{R}_{k}} V)^{-1} (\alpha {\mathcal{R}_{k}} V u^i - {\mathcal{R}_{k}} f - {\mathcal{R}_{k}}(\sigma \dot B_x))} \\ & \leq \sum_{j \geq 0} \nrm[\mathcal L(L_{-1/2-\epsilon}^2,L_{-1/2-\epsilon}^2)]{{\mathcal{R}_{k}} V}^j \cdot \nrm[L_{-1/2-\epsilon}^2]{\alpha {\mathcal{R}_{k}} V u^i - {\mathcal{R}_{k}} f - {\mathcal{R}_{k}}(\sigma \dot B_x)} \\ & \leq C ( \nrm[L_{-1/2-\epsilon}^2]{\alpha {\mathcal{R}_{k}} V u^i} + \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}} f} + \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot B_x)}) \\ & \leq C ( \nrm[L_{1/2+\epsilon}^2]{\alpha V u^i} + \nrm[L_{1/2+\epsilon}^2]{f} + \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot B_x)} ). \end{align*} Therefore \eqref{thm:SolPosed-SchroEqu2018} is proved. The proof is complete.} \end{proof}
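\begin{rem}
The constructive core of Theorem \ref{thm:MildSolUnique-SchroEqu2018} is the Neumann expansion $u^{sc} = \sum_{j \geq 0} ({\mathcal{R}_{k}} V)^j F$, i.e.\ the fixed-point iteration $u \mapsto {\mathcal{R}_{k}}(Vu) + F$. The following minimal Python sketch is our own discretization with hypothetical $V$, $f$ and grid (here $\alpha = 0$ and $\sigma = 0$, so that $F = -{\mathcal{R}_{k}} f$); the weakly singular diagonal of the kernel is replaced by $\int_{|y| < h/2} (4\pi|y|)^{-1} \dif{y} = (h/2)^2/2$.
\begin{verbatim}
import numpy as np

k, h = 20.0, 0.125
pts = np.arange(-0.5 + h / 2, 0.5, h)
Y = np.stack(np.meshgrid(pts, pts, pts, indexing="ij"),
             axis=-1).reshape(-1, 3)

V = 1.0 * (np.sum(Y ** 2, axis=-1) < 0.16)   # potential, supp V in D
f = np.exp(-8 * np.sum(Y ** 2, axis=-1))     # deterministic source

R = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
np.fill_diagonal(R, 1.0)                     # placeholder, corrected below
G = np.exp(1j * k * R) / (4 * np.pi * R) * h ** 3   # discrete R_k
np.fill_diagonal(G, (h / 2) ** 2 / 2)        # cell self-interaction

F = -G @ f                                   # F = -R_k f
u = F.copy()
for _ in range(20):                          # Neumann / fixed-point iteration
    u = G @ (V * u) + F                      # contraction when k > k*
print(np.linalg.norm(u - (G @ (V * u) + F))) # residual, nearly zero
\end{verbatim}
The iteration converges precisely because $\nrm[\mathcal{L}(L^2(D),L^2(D))]{{\mathcal{R}_{k}} V} < 1$ for $k > k^*$, which is the quantitative content of Lemma \ref{lemma:RkVBounded-SchroEqu2018}.
\end{rem}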
Next we show that the far-field pattern is well-defined in the $L^2$ sense. From \eqref{eq:uscDefn-SchroEqu2018} we derive that \begin{align*} u^{sc} & = (I - {\mathcal{R}_{k}} V)^{-1} \big( \alpha {\mathcal{R}_{k}} V u^i - {\mathcal{R}_{k}} f - {\mathcal{R}_{k}}(\sigma \dot B_x) \big) \\ & = {\mathcal{R}_{k}} (I - V {\mathcal{R}_{k}})^{-1} (\alpha V u^i - f - \sigma \dot B_x). \end{align*} Therefore, we define the far-field pattern of the scattered wave $u^{sc}(x,k,d,\omega)$ formally in the following manner, \begin{equation} \label{eq:uInftyDefn-SchroEqu2018} u^\infty(\hat x,k,d,\omega) := \frac 1 {4\pi} \int_D e^{-ik\hat x \cdot y} (I - V {\mathcal{R}_{k}})^{-1} (\alpha V u^i - f - \sigma \dot B_y) \dif{y}, \quad \hat x \in \mathbb{S}^2. \end{equation} Another result concerning the \textbf{DP} is Theorem \ref{thm:FarFieldWellDefined-SchroEqu2018}, which shows that $u^\infty(\hat x,k,d,\omega)$ is well-defined. \begin{thm} \label{thm:FarFieldWellDefined-SchroEqu2018} Define the far-field pattern of the mild solution as in \eqref{eq:uInftyDefn-SchroEqu2018}. When \jz{$k > k^*$}, there is a subset \jz{$\Omega_0 \subset \Omega$} with zero measure $\mathbb P (\Omega_0) = 0$, such that there holds \[ u^\infty(\hat x,k,d,\omega) \in L^2(\mathbb{S}^2),\ \ {\,\forall\,} \omega \in \Omega \backslash \Omega_0. \] \end{thm} \begin{proof}[Proof of Theorem \ref{thm:FarFieldWellDefined-SchroEqu2018}] \jz{By Lemma \ref{lemma:RkVBounded-SchroEqu2018}, $$\nrm[\mathcal{L}(L^2(D), L^2(D))]{V {\mathcal{R}_{k}}} \leq C k^{-1} < 1$$ when $k$ is sufficiently large. Therefore we have \begin{align} |u^\infty(\hat x)|^2 & \lesssim |D|^2 \cdot \int_D |\sum_{j \geq 0} (V {\mathcal{R}_{k}})^j (\alpha V u^i - f)|^2 \dif{y} \nonumber\\ & \ \ \ \ + \big| \int_D e^{-ik\hat x \cdot y} \sum_{j \geq 1} (V {\mathcal{R}_{k}})^j (\sigma \dot B_y) \dif{y} \big|^2 \nonumber\\ & \ \ \ \ + \big| \int_D e^{-ik\hat x \cdot y} \sigma \dot B_y \dif{y} \big|^2 \nonumber\\ & =: f_1(\hat x, k) + f_2(\hat x, k,\omega) + f_3(\hat x, k,\omega). \label{eq:a1} \end{align} We next derive estimates on each term $f_j ~(j=1,2,3)$ defined in \eqref{eq:a1}. For $f_1$, we have \begin{align} f_1(\hat x, k) & \leq C |D|^2 \cdot ( \sum_{j \geq 0} k^{-j} \nrm[L^2(D)]{\alpha V u^i - f} )^2 \leq C |D|^2 (\nrm[L^2(D)]{V} + \nrm[L^2(D)]{f})^2. \label{eq:f1-SchroEqu2018} \end{align} For $f_2$, by utilizing \eqref{eq:VRkbdd-SchroEqu2018}, one can compute \begin{equation} \label{eq:f2Inter-SchroEqu2018} f_2(\hat x, k, \omega) \leq C \int_D |\sum_{j \geq 0} (V {\mathcal{R}_{k}})^j V {\mathcal{R}_{k}}(\sigma \dot B_y)|^2 \dif{y} \leq C \big( \sum_{j \geq 0} k^{-j} \nrm[L^2(D)]{V {\mathcal{R}_{k}}(\sigma \dot B_y)} \big)^2. \end{equation} By virtue of the boundedness of the support of $V$, we can continue \eqref{eq:f2Inter-SchroEqu2018} as \begin{equation} \label{eq:f2-SchroEqu2018} f_2(\hat x, k, \omega) \leq C \big( \sum_{j \geq 0} k^{-j} \nrm[L_{-1/2-\epsilon}^2]{V {\mathcal{R}_{k}}(\sigma \dot B_y)} \big)^2 \leq C_V \nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot B_y)}^2. \end{equation}} By (\ref{eq:ItoIso-SchroEqu2018}), the expectation of $f_3(\hat x, k, \omega)$ is \begin{equation} \label{eq:f3-SchroEqu2018} \mathbb{E} f_3(\hat x, k, \omega) = \mathbb{E} |\agl[\dot B_y, e^{-ik\hat x \cdot y} \sigma(y)]|^2 = \int_D |\sigma(y)|^2 \dif{y}.
\end{equation} Combining \eqref{eq:Rksigma2Bounded-SchroEqu2018}, \eqref{eq:a1}-\eqref{eq:f1-SchroEqu2018} and \eqref{eq:f2-SchroEqu2018}-\eqref{eq:f3-SchroEqu2018}, we arrive at \begin{align} \mathbb E |u^\infty(\hat x)|^2 & \leq C |D|^2 (\nrm[L^2(D)]{V} + \nrm[L^2(D)]{f})^2 + C_V \mathbb E (\nrm[L_{-1/2-\epsilon}^2]{{\mathcal{R}_{k}}(\sigma \dot B_y)}^2) + \int_D |\sigma(y)|^2 \dif{y} \nonumber\\ & \leq C < +\infty \label{eq:a2-SchroEqu2018} \end{align} for some positive constant $C$. From \eqref{eq:a2-SchroEqu2018} we arrive at \begin{equation} \label{eq:a3-SchroEqu2018} \mathbb E \int_{\mathbb{S}^2} |u^\infty(\hat x)|^2 \dif{S} \leq C < +\infty. \end{equation} Our conclusion follows from \eqref{eq:a3-SchroEqu2018} immediately. \end{proof} \sq{\section{Some asymptotic estimates} \label{sec:AsyEst-SchroEqu2018}} \sq{This section is devoted to preparations for the recovery of the variance function. To recover $\sigma^2(x)$, only the passive far-field patterns are utilized. Therefore, throughout this section, the $\alpha$ in \eqref{eq:1} is set to 0. Motivated by \cite{caro2016inverse}, our recovery formula for the variance function is of the form \begin{equation} \label{eq:example-SchroEqu2018} \frac 1 {K} \int_{K}^{2K} \overline{u^\infty(\hat x,k,\omega)} \cdot u^\infty(\hat x,k+\tau,\omega) \dif{k}. \end{equation}} \sq{After expanding $u^\infty(\hat x,k,\omega)$ as a Neumann series, there will be several crossover terms in \eqref{eq:example-SchroEqu2018} which decay at different rates in $K$. In this section, we focus on the asymptotic estimates of these terms, which pave the way for the recovery of $\sigma^2(x)$. The recovery of $\sigma^2(x)$ is presented in the next section.} \sq{To start, we write \begin{equation} \label{eq:u1-SchroEqu2018} u_1^\infty(\hat x,k,\omega) := u^\infty(\hat x,k,\omega) - \mathbb{E} u^\infty(\hat x,k). \end{equation} Note that $u_1^\infty$ is independent of the incident direction $d$. Assume that $k > k^*$; then the operator $(I - {\mathcal{R}_{k}} V)^{-1}$ has the Neumann expansion $\sum_{j=0}^{+\infty} ({\mathcal{R}_{k}} V)^j$. By \eqref{eq:uInftyDefn-SchroEqu2018} and \eqref{eq:u1-SchroEqu2018} we have \begin{align} u_1^\infty(\hat x,k,\omega) \ = & \ \frac {-1} {4\pi} \sum_{j=0}^{+\infty} \int_D e^{-ik\hat x \cdot y} (V {\mathcal{R}_{k}})^j (\sigma \dot B_y) \dif{y}, \quad \hat x \in \mathbb{S}^2 \nonumber\\ := & \ \frac {-1} {4\pi} \big[ F_0(k,\hat x) + F_1(k,\hat x) \big], \label{eq:u1InftyDefn-SchroEqu2018} \end{align} where \begin{equation} \label{eq:Fjkx-SchroEqu2018} \left\{\begin{aligned} F_0(k,\hat x,\omega) & := \int_D e^{-ik \hat x \cdot y} (\sigma \dot{B}_y) \dif{y}, \\ F_1(k,\hat x,\omega) & := \sum_{j \geq 1} \int_D e^{-ik \hat x \cdot y} (V {\mathcal{R}_{k}})^j (\sigma \dot{B}_y) \dif{y}. \end{aligned}\right. \end{equation} Meanwhile, the expectation of the far-field pattern $\mathbb E u^\infty$ is \begin{equation} \label{eq:u2InftyDefn-SchroEqu2018} \mathbb E u^\infty(\hat x,k) = \frac {-1} {4\pi} \int_D e^{-ik\hat x \cdot y} (I - V {\mathcal{R}_{k}})^{-1} (f) \dif{y}, \quad \hat x \in \mathbb{S}^2. \end{equation}} \sq{ \begin{lem} \label{lemma:FarFieldGoToZero-SchroEqu2018} We have \[ \lim_{k \to +\infty} |\mathbb E u^\infty(\hat x,k)| = 0 \quad \text{uniformly in } \hat x \in \mathbb{S}^2.
\] \end{lem} \begin{proof}[Proof of Lemma \ref{lemma:FarFieldGoToZero-SchroEqu2018}] Due to the fact that $f \in L^\infty(D) \subset L^2(D)$, we know \begin{equation} \label{eq:fApprox-SchroEqu2018} {\,\forall\,} \epsilon > 0, {\,\exists\,} \varphi_\epsilon \in \mathscr{D}(D), \textrm{~s.t.~} \nrm[L^2(D)]{f-\varphi_\epsilon} < \epsilon / (2 |D|^{\frac 1 2}). \end{equation} Recall that $k > k^*$, so $(I - V {\mathcal{R}_{k}})^{-1}$ equals $I + \sum_{j=1}^{+\infty} (V{\mathcal{R}_{k}})^j$. By \eqref{eq:fApprox-SchroEqu2018} and Lemma \ref{lemma:RkVBounded-SchroEqu2018}, and utilizing the stationary phase lemma, one can deduce the following, \begin{align} |\mathbb E u^\infty(\hat x,k)| & \lesssim \big| \int_D e^{-ik \hat x \cdot y} \varphi_\epsilon(y) \dif{y} \big| + \big| \int_D e^{-ik \hat x \cdot y} \big[ f(y) - \varphi_\epsilon(y) + \big( \sum_{j \geq 1} (V{\mathcal{R}_{k}})^j f \big) (y) \big] \dif{y} \big| \nonumber\\ & \lesssim \big| k^{-2} \int_D e^{-ik \hat x \cdot y} \cdot \Delta \varphi_\epsilon(y) \dif{y} \big| + |D|^{\frac 1 2} \cdot \nrm[L^2(D)]{f - \varphi_\epsilon + \sum_{j \geq 1} (V{\mathcal{R}_{k}})^j f } \nonumber\\ & \leq k^{-2} \cdot |D|^{\frac 1 2} \cdot \nrm[L^2(D)]{\Delta \varphi_\epsilon} + |D|^{\frac 1 2} \cdot \big( \epsilon/(2 |D|^{\frac 1 2}) + C \sum_{j \geq 1} k^{-j} \nrm[L^2(D)]{f} \big) \nonumber\\ & = k^{-2} \cdot |D|^{\frac 1 2} \nrm[L^2(D)]{\Delta \varphi_\epsilon} + \epsilon/2 + C (k-1)^{-1} \cdot \nrm[L^2(D)]{f}. \label{eq:u2InftyEst-SchroEqu2018} \end{align} Write $\mathcal K := \max\{ K_0, \frac 2 {\sqrt{\epsilon}} |D|^{\frac 1 4} \nrm[L^2(D)]{\Delta \varphi_\epsilon}^{\frac 1 2}, 1 + \frac {4C} \epsilon \nrm[L^2(D)]{f} \}$. From (\ref{eq:u2InftyEst-SchroEqu2018}) we have $${\,\forall\,} k > \mathcal K, \quad |\mathbb E u^\infty(\hat x,k)| < \frac \epsilon 2 + \frac \epsilon 4 + \frac \epsilon 4 = \epsilon, \quad \text{uniformly for } \hat x \in \mathbb{S}^2.$$ Since $\epsilon$ is arbitrary, the conclusion follows. \end{proof}
} \sq{By substituting \eqref{eq:u1-SchroEqu2018}-\eqref{eq:u2InftyDefn-SchroEqu2018} into \eqref{eq:example-SchroEqu2018}, we obtain several crossover terms among $F_0$, $F_1$ and $\mathbb E u^\infty$. The asymptotic estimates of these crossover terms are the main purpose of Sections \ref{subsec:AELeading-SchroEqu2018} and \ref{subsec:AEHigher-SchroEqu2018}. Section \ref{subsec:AELeading-SchroEqu2018} focuses on the estimate of the leading order term, while the estimates of the higher order terms are presented in Section \ref{subsec:AEHigher-SchroEqu2018}.} \subsection{Asymptotic estimates of the leading order term} \label{subsec:AELeading-SchroEqu2018} \jz{Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018} below is the asymptotic estimate of the crossover leading order term. By utilizing the ergodicity, the result of Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018} is also statistically stable. To prove Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018}, we need Lemmas \ref{lem:asconverg-SchroEqu2018}, \ref{lem:IsserlisThm-SchroEqu2018} and \ref{lemma:LeadingTermTechnical-SchroEqu2018}. Lemma \ref{lem:asconverg-SchroEqu2018} is the probabilistic foundation of our single-realization recovery result, and Lemma \ref{lem:IsserlisThm-SchroEqu2018} is called Isserlis' Theorem. In order to keep our arguments flowing, we postpone Lemma \ref{lemma:LeadingTermTechnical-SchroEqu2018} until we finish Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018}.} \begin{lem} \label{lem:asconverg-SchroEqu2018} Let $X$ and $X_n ~(n=1,2,\cdots)$ be complex-valued random variables. Then $$X_n \to X \textrm{~a.s.~} \quad\text{if and only if}\quad \lim_{K_0 \to +\infty} P \big( \bigcup_{j \geq K_0} \{ |X_j - X| \geq \epsilon \} \big) = 0 ~{\,\forall\,} \epsilon > 0.$$ \end{lem} \sq{The proof of Lemma \ref{lem:asconverg-SchroEqu2018} can be found in [\citen{dudley2002real}, Lemma 9.2.4].} \begin{lem}[Isserlis' Theorem \cite{Michalowicz2009}] \label{lem:IsserlisThm-SchroEqu2018} Suppose ${(X_{1},\dots, X_{2n})}$ is a zero-mean multi-variate normal random vector. Then \[ \mathbb{E} (X_1 X_2 \cdots X_{2n}) = \sum\prod \mathbb{E} (X_i X_j), \quad \mathbb{E} (X_1 X_2 \cdots X_{2n-1}) = 0. \] In particular, \[ \mathbb{E} (\,X_{1}X_{2}X_{3}X_{4}\,) = \mathbb{E} (X_{1}X_{2})\, \mathbb{E} (X_{3}X_{4}) + \mathbb{E} (X_{1}X_{3})\, \mathbb{E} (X_{2}X_{4}) + \mathbb{E} (X_{1}X_{4})\, \mathbb{E} (X_{2}X_{3}). \] \end{lem} \jz{The proof of Lemma \ref{lem:IsserlisThm-SchroEqu2018} can be found in \cite{Michalowicz2009}.} In what follows, $\widehat{\varphi}$ denotes the Fourier transform of the function $\varphi$ defined as \begin{equation*} \widehat{\varphi}(\xi) := (2\pi)^{-n/2} \int_{\R^n} e^{-i x \cdot \xi} \varphi(x) \dif{x}, \quad \xi \in \R^n. \end{equation*} For notational convenience, we use ``$\{K_j\} \in P(t)$'' to mean that the sequence $\{K_j\}_{j \in \mathbb{N}^+}$ satisfies $K_j \geq C j^t ~(j \in \mathbb{N}^+)$ for some fixed constant $C > 0$. Throughout the following, $\gamma$ stands for an arbitrarily fixed positive real number. Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018} gives the asymptotic estimates of the crossover leading order term. \begin{lem} \label{lemma:LeadingTermErgo-SchroEqu2018} Write \begin{equation*} X_{0,0}(K,\tau,\hat x,\omega) = \frac 1 K \int_K^{2K} \overline{F_0(k,\hat x,\omega)} \cdot F_0(k+\tau,\hat x,\omega) \dif{k}.
\end{equation*} Assume $\{K_j\} \in P(2+\gamma)$, then for any $\tau > 0$, we have \begin{equation*} \lim_{j \to +\infty} X_{0,0}(K_j,\tau,\hat x,\omega) = (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x) \quad \textrm{~a.s.~}. \end{equation*} \end{lem} We may denote $X_{0,0}(K,\tau,\hat x,\omega)$ as $X_{0,0}$ for short if it is clear in the context. \begin{proof}[Proof of Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018}] We have \begin{align} & \ \mathbb{E} \big( \overline{F_0(k,\hat x,\omega)} F_0(k+\tau,\hat x,\omega) \big) \nonumber\\ = & \ \mathbb{E} \big( \int_{D_y} e^{ik \hat x \cdot y} \sigma(y) \dif{B_y} \cdot \int_{D_z} e^{-i(k+\tau) \hat x \cdot z} \sigma(z) \dif{B_z} \big) \nonumber\\ = & \ \int_{D} e^{ik \hat x \cdot y} e^{-i(k+\tau) \hat x \cdot y} \sigma(y) \sigma(y) \dif{y} = (2\pi)^{3/2}\, \widehat{\sigma^2} (\tau \hat x). \label{eq:I0-SchroEqu2018} \end{align} From \eqref{eq:I0-SchroEqu2018} we conclude that \begin{align*} \mathbb{E} ( X_{0,0} ) = \frac 1 K \int_K^{2K} \mathbb{E} \big( \overline{F_0(k,\hat x,\omega)} F_0(k+\tau,\hat x,\omega) \big) \dif{k} = (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x). \end{align*} By Isserlis' Theorem and \eqref{eq:I0-SchroEqu2018}, and note that $\overline{F_j(k,\hat x,\omega)} = F_j(-k,\hat x,\omega)$, one can compute \begin{align} & \mathbb{E} \big( | X_{0,0} - (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x) |^2 \big) \nonumber\\ = & \frac 1 {K^2} \int_K^{2K} \int_K^{2K} \mathbb{E} \Big( \overline{F_0(k_1,\hat x,\omega)} F_0(k_1+\tau,\hat x,\omega) F_0(k_2,\hat x,\omega) \overline{F_0(k_2+\tau,\hat x,\omega)} \Big) \dif{k_1} \dif{k_2} \nonumber\\ & - (2\pi)^3 |\widehat{\sigma^2} \big( \tau \hat x \big)|^2 - (2\pi)^3 |\widehat{\sigma^2} \big( \tau \hat x \big)|^2 + (2\pi)^3 |\widehat{\sigma^2} \big( \tau \hat x \big)|^2 \hspace{1.5cm} (\text{by } \eqref{eq:I0-SchroEqu2018}) \nonumber\\ = & \frac 1 {K^2} \int_K^{2K} \int_K^{2K} \mathbb{E} \big( \overline{F_0(k_1,\hat x,\omega)} F_0(k_1+\tau,\hat x,\omega) \big) \cdot \mathbb{E} \big( F_0(k_2,\hat x,\omega) \overline{F_0(k_2+\tau,\hat x,\omega)} \big) \nonumber\\ & + \mathbb{E} \big( \overline{F_0(k_1,\hat x,\omega)} F_0(k_2,\hat x,\omega) \big) \cdot \mathbb{E} \big( F_0(k_1+\tau,\hat x,\omega) \overline{F_0(k_2+\tau,\hat x,\omega)} \big) \nonumber\\ & + \mathbb{E} \big( \overline{F_0(k_1,\hat x,\omega)} F_0(-k_2-\tau,\hat x,\omega) \big) \cdot \mathbb{E} \big( \overline{F_0(-k_1-\tau,\hat x,\omega)} F_0(k_2,\hat x,\omega) \big) \dif{k_1} \dif{k_2} \nonumber\\ & - (2\pi)^3 |\widehat{\sigma^2} \big( \tau \hat x \big)|^2 \nonumber\\ = & \frac {(2\pi)^3} {K^2} \int\limits_K^{2K} \int\limits_K^{2K} |\widehat{\sigma^2}((k_2 - k_1) \hat x)|^2 \dif{k_1} \dif{k_2} + \frac {(2\pi)^3} {K^2} \int\limits_K^{2K} \int\limits_K^{2K} |\widehat{\sigma^2}((k_1 + k_2 + \tau) \hat x)|^2 \dif{k_1} \dif{k_2}. \label{eq:X00Square-SchroEqu2018} \end{align} \jz{ Note that $|\widehat{\sigma^2} \big( (k_1 - k_2) \hat x \big)| = |\widehat{\sigma^2} \big( -(k_1 - k_2) \hat x \big)|$.} Combining (\ref{eq:X00Square-SchroEqu2018}) and Lemma \ref{lemma:LeadingTermTechnical-SchroEqu2018}, we have \begin{equation} \label{eq:X00Bdd-SchroEqu2018} \mathbb{E} \big( | X_{0,0} - (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x) |^2 \big) = \mathcal{O}(K^{-1/2}), \quad K \to +\infty. 
\end{equation} For any integer $K_0 > 0$, by Chebyshev's inequality and (\ref{eq:X00Bdd-SchroEqu2018}) we have \begin{align} & P \big( \bigcup_{j \geq K_0} \{ | X_{0,0}(K_j) - (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x) | \geq \epsilon \} \big) \leq \frac 1 {\epsilon^2} \sum_{j \geq K_0} \mathbb{E} \big( | X_{0,0}(K_j) - (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x) |^2 \big) \nonumber\\ \lesssim & \frac 1 {\epsilon^2} \sum_{j \geq K_0} K_j^{-1/2} = \frac 1 {\epsilon^2} \sum_{j \geq K_0} j^{-1-\gamma/2} \leq \frac 1 {\epsilon^2} \int_{K_0}^{+\infty} (t-1)^{-1-\gamma/2} \dif{t} = \frac 2 {\epsilon^2 \gamma} (K_0-1)^{-\gamma/2}. \label{eq:PX00Epsilon-SchroEqu2018} \end{align} Here $X_{0,0}(K_j)$ stands for $X_{0,0}(K_j, \tau, \hat x,\omega)$. By Lemma \ref{lem:asconverg-SchroEqu2018}, formula (\ref{eq:PX00Epsilon-SchroEqu2018}) implies that for any fixed $\tau \geq 0$ and fixed $\hat x \in \mathbb{S}^2$, we have $$X_{0,0}(K_j,\tau,\hat x,\omega) \to (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x)\quad \textrm{~a.s.~}.$$ The proof is done. \end{proof}
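\begin{rem}
The statement of Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018} can be observed numerically. The following minimal Monte Carlo sketch (our own illustration with a hypothetical $\sigma$) draws a single realization of the discretized white noise, evaluates $F_0(k,\hat x,\omega)$ over $k \in [K, 2K]$ and compares the average $X_{0,0}$ with the deterministic target $(2\pi)^{3/2}\widehat{\sigma^2}(\tau\hat x) = \int_D e^{-i\tau \hat x \cdot y}\sigma^2(y) \dif{y}$ from \eqref{eq:I0-SchroEqu2018}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
h = 0.05
pts = np.arange(-0.5 + h / 2, 0.5, h)
Y = np.stack(np.meshgrid(pts, pts, pts, indexing="ij"),
             axis=-1).reshape(-1, 3)

sigma = np.exp(-8 * np.sum(Y ** 2, axis=-1))   # sigma, supp in D
xi = rng.standard_normal(Y.shape[0])           # ONE noise realization
xhat = np.array([0.0, 0.0, 1.0])
tau, K = 2.0, 400.0

def F0(k):
    # F_0(k, xhat) = int_D e^{-ik xhat.y} sigma(y) dB_y, grid version
    return np.sum(np.exp(-1j * k * (Y @ xhat)) * sigma * xi) * h ** 1.5

ks = np.linspace(K, 2 * K, 2000)
X00 = np.mean([np.conj(F0(k)) * F0(k + tau) for k in ks])

target = np.sum(np.exp(-1j * tau * (Y @ xhat)) * sigma ** 2) * h ** 3
print(X00, target)   # X00 fluctuates around the target, tighter as K grows
\end{verbatim}
Rerunning with larger $K$ tightens the agreement, consistent with the variance bound $\mathcal{O}(K^{-1/2})$ in \eqref{eq:X00Bdd-SchroEqu2018}.
\end{rem}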
Lemma \ref{lemma:LeadingTermTechnical-SchroEqu2018} plays a critical role in the estimates of the leading order term. \begin{lem} \label{lemma:LeadingTermTechnical-SchroEqu2018} \jz{Assume that} $\tau \geq 0$ is fixed, then $\exists K_0 > \tau$, and $K_0$ is independent of $\hat x$, such that for all $K > K_0$, we have the following estimates: \begin{align} \frac {(2\pi)^3} {K^2} \int_K^{2K} \int_K^{2K} \big| \widehat{\sigma^2}((k_1 - k_2) \hat x) \big|^2 \dif{k_1} \dif{k_2} & \leq CK^{-1/2}, \label{eq:F0F0TermOne-SchroEqu2018} \\ \frac {(2\pi)^3} {K^2} \int_K^{2K} \int_K^{2K} \big| \widehat{\sigma^2}((k_1 + k_2 + \tau) \hat x) \big|^2 \dif{k_1} \dif{k_2} & \leq CK^{-1/2}, \label{eq:F0F0TermTwo-SchroEqu2018} \end{align} for some constant $C$ independent of $\tau$ and $\hat x$. \end{lem} \begin{proof}[Proof of Lemma \ref{lemma:LeadingTermTechnical-SchroEqu2018}] Note that for every $x \in \R^3$, we have \begin{equation*} |\widehat{\sigma^2}(x)|^2 \simeq \big| \int_{\R^3} e^{-i x \cdot \xi} \sigma^2(\xi) \dif{\xi} \big|^2 \leq \big( \int_{\R^3} |\sigma^2(\xi)| \dif{\xi} \big)^2 \leq \nrm[L^\infty(D)]{\sigma}^4 \cdot |D|^2. \end{equation*} To conclude (\ref{eq:F0F0TermOne-SchroEqu2018}), we make a change of variable, \begin{equation*} \left\{\begin{aligned} s & = k_1 - k_2, \\ t & = k_2. \end{aligned}\right. \end{equation*} Write \jz{$Q = \{(s,t) \in \R^2 \,\big|\, K \leq s+t \leq 2K,\, K \leq t \leq 2K \}$}. $Q$ is illustrated as in Figure \ref{fig:D-SchroEqu2018}. \begin{figure} \caption{Illustration of $Q$} \label{fig:D-SchroEqu2018} \end{figure} Recall that $\mathop{\rm supp} \sigma \subseteq D$, so we have \begin{align} & \frac 1 {K^2} \int_K^{2K} \int_K^{2K} |\widehat{\sigma^2}((k_1 - k_2) \hat x)|^2 \dif{k_1} \dif{k_2} = \frac 1 {K^2} \iint_Q \big| \widehat{\sigma^2}(s \hat x) \big|^2 \dif{s} \dif{t} \nonumber\\ = \ & \frac 1 {K^2} \int_{-K}^0 (K+s) |\widehat{\sigma^2}(s \hat x)|^2 \dif{s} + \frac 1 {K^2} \int_0^{K} (K-s) |\widehat{\sigma^2}(s \hat x)|^2 \dif{s} \nonumber\\ \simeq \ & \int_0^1 \Big( \int_D e^{-iKs \hat x \cdot y} \sigma^2(y) \dif{y} \cdot \int_D e^{iKs \hat x \cdot z} \sigma^2(z) \dif{z} \Big) \dif{s} \nonumber\\ = \ & \int_{(D \times D) \backslash E_\epsilon} \Big( \int_0^1 e^{iK(\hat x \cdot z - \hat x \cdot y)s} \dif{s} \Big) \sigma^2(y) \sigma^2(z) \dif{y} \dif{z} \nonumber\\ & \quad + \int_{E_\epsilon} \Big( \int_0^1 e^{iK(\hat x \cdot z - \hat x \cdot y)s} \dif{s} \Big) \sigma^2(y) \sigma^2(z) \dif{y} \dif{z} \nonumber\\ =: & A_1 + A_2, \label{eq:sigma2Inter-SchroEqu2018} \end{align} where $E_\epsilon := \{ (y,z) \in D \times D ; |\hat x \cdot z - \hat x \cdot y| < \epsilon \}$. We first estimate $A_1$, \begin{align} |A_1| & = \Big| \int_{(D \times D) \backslash E_\epsilon} \Big( \int_0^1 e^{iK(\hat x \cdot z - \hat x \cdot y)s} \dif{s} \Big) \sigma^2(y) \sigma^2(z) \dif{y} \dif{z} \Big| \nonumber\\ & \leq \int_{(D \times D) \backslash E_\epsilon} \Big| \frac {e^{iK(\hat x \cdot z - \hat x \cdot y)} - 1} {iK(\hat x \cdot z - \hat x \cdot y)} \sigma^2(y) \sigma^2(z) \Big| \dif{y} \dif{z} \nonumber\\ & \leq \frac 2 {K\epsilon} \nrm[L^\infty(D)]{\sigma}^4 \int_{D \times D} 1 \dif{y} \dif{z} = \frac {2|D|^2} {K\epsilon} \nrm[L^\infty(D)]{\sigma}^4. \label{eq:sigma2InterA1-SchroEqu2018} \end{align} \jz{Recall that $\text{diam}\,D < +\infty$ and that the problem setting is in $\R^3$. 
We can estimate $A_2$ as} \begin{align} |A_2| & \leq \nrm[L^\infty(D)]{\sigma}^4 \int_{E_\epsilon} 1 \dif{y} \dif{z} \nonumber\\ & = \nrm[L^\infty(D)]{\sigma}^4 \int_D \big( \int_{y \in D \,,\, |\hat x \cdot z - \hat x \cdot y| < \epsilon} 1 \dif{y} \big) \dif{z} \nonumber\\ & \leq \nrm[L^\infty(D)]{\sigma}^4 \int_D 2\epsilon (\text{diam}\,D)^2 \dif{z} \nonumber\\ & \leq 2\nrm[L^\infty(D)]{\sigma}^4 (\text{diam}\,D)^2 |D| \cdot \epsilon. \label{eq:sigma2InterA2-SchroEqu2018} \end{align} Set $\epsilon = K^{-1/2}$. By \eqref{eq:sigma2Inter-SchroEqu2018}-\eqref{eq:sigma2InterA2-SchroEqu2018}, we arrive at $$\frac 1 {K^2} \int\limits_K^{2K} \int\limits_K^{2K} |\widehat{\sigma^2}((k_1 - k_2) \hat x)|^2 \dif{k_1} \dif{k_2} \leq C K^{-1/2},$$ for some constant $C$ independent of $\hat x$. Now we prove \eqref{eq:F0F0TermTwo-SchroEqu2018}. Similarly, we make a change of variable: \begin{equation*} \left\{\begin{aligned} s & = k_1 + k_2 + \tau, \\ t & = k_2. \end{aligned}\right. \end{equation*} Write \jz{$Q' = \{(s,t) \in \R^2 \,\big|\, K \leq s-t-\tau \leq 2K,\, K \leq t \leq 2K \}$}. One can compute \begin{align*} & \frac 1 {K^2} \int_K^{2K} \int_K^{2K} | \widehat{\sigma^2}((k_1 + k_2 + \tau) \hat x) |^2 \dif{k_1} \dif{k_2} = \frac 1 {K^2} \iint_{Q'} | \widehat{\sigma^2}(s \hat x) |^2 \dif{s} \dif{t} \\ = & \frac 1 {K^2} \int_{2K+\tau}^{3K+\tau} (s-2K-\tau) | \widehat{\sigma^2}(s \hat x) |^2 \dif{s} + \frac 1 {K^2} \int_{3K+\tau}^{4K+\tau} (4K+\tau-s) | \widehat{\sigma^2}(s \hat x) |^2 \dif{s} \\ \leq & \frac 2 {K} \int_{2K+\tau}^{4K+\tau} | \widehat{\sigma^2}(s \hat x) |^2 \dif{s} = 2 \int_{2+\tau/K}^{4+\tau/K} | \widehat{\sigma^2}(Ks \hat x) |^2 \dif{s}. \end{align*} Thus when $K > \tau$, \begin{equation} \label{eq:sigma2InterTau-SchroEqu2018} \frac 1 {K^2} \int_K^{2K} \int_K^{2K} | \widehat{\sigma^2}((k_1 + k_2 + \tau) \hat x) |^2 \dif{k_1} \dif{k_2} \leq 2 \int_2^5 | \widehat{\sigma^2}(Ks \hat x) |^2 \dif{s}. \end{equation} Following the same arguments as in \eqref{eq:sigma2Inter-SchroEqu2018}-\eqref{eq:sigma2InterA2-SchroEqu2018}, from \eqref{eq:sigma2InterTau-SchroEqu2018} we arrive at \eqref{eq:F0F0TermTwo-SchroEqu2018}. The proof is done. \end{proof}
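\begin{rem}
The decay rate \eqref{eq:F0F0TermOne-SchroEqu2018} is easy to observe numerically. The following minimal sketch is our own illustration: for simplicity of the closed-form transform we take a Gaussian profile for $\widehat{\sigma^2}$ (corresponding to $\sigma^2(y) = e^{-8|y|^2}$; the compact-support assumption is relaxed here purely for illustration) and evaluate the double frequency average for several values of $K$.
\begin{verbatim}
import numpy as np

def hat_sigma2(s):
    # FT of sigma^2(y) = exp(-8|y|^2), evaluated along s * xhat
    return (np.pi / 8) ** 1.5 * np.exp(-s ** 2 / 32) / (2 * np.pi) ** 1.5

for K in [10.0, 100.0, 1000.0]:
    k = np.linspace(K, 2 * K, 400)
    K1, K2 = np.meshgrid(k, k, indexing="ij")
    val = (2 * np.pi) ** 3 * np.mean(np.abs(hat_sigma2(K1 - K2)) ** 2)
    print(K, val, val * K ** 0.5)   # third column stays bounded
\end{verbatim}
For this smooth choice the average actually decays like $K^{-1}$, faster than the worst-case rate $K^{-1/2}$ guaranteed by Lemma \ref{lemma:LeadingTermTechnical-SchroEqu2018}.
\end{rem}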
\subsection{Asymptotic estimates of higher order terms} \label{subsec:AEHigher-SchroEqu2018} The asymptotic estimates of the higher order terms are presented in Lemma \ref{lemma:HOT-SchroEqu2018}. \begin{lem} \label{lemma:HOT-SchroEqu2018} For every $\hat x_1$, $\hat x_2 \in \mathbb{S}^2$ and every $k_1$, $k_2 \geq k$, we have the following estimates ($j = 0,1$) as $k \to +\infty$, \begin{align} \big| \mathbb{E} \big( \overline{F_j(k_1,\hat x_1,\omega)} \cdot F_1(k_2,\hat x_2,\omega) \big) \big| & = \mathcal{O}(k^{-1}), \label{eq:hotFjF1-SchroEqu2018}\\ \big| \mathbb{E} \big( F_j(k_1,\hat x_1,\omega) \cdot F_1(k_2,\hat x_2,\omega) \big) \big| & = \mathcal{O}(k^{-1}). \label{eq:hotFjF1Conju-SchroEqu2018} \end{align} \end{lem} \begin{proof}[Proof of Lemma \ref{lemma:HOT-SchroEqu2018}] The proof of formula \eqref{eq:hotFjF1Conju-SchroEqu2018} is similar to that of \eqref{eq:hotFjF1-SchroEqu2018}, so we only present the proof of \eqref{eq:hotFjF1-SchroEqu2018}. In this proof, we may drop the arguments $k$, $\hat x$ or $\omega$ from $F_j$ when it is clear from the context. For notational convenience, we write \begin{align*} G_j(k,\hat x,\omega) & := \int_D e^{-ik \hat x \cdot y} (V {\mathcal{R}_{k}})^j (\sigma \dot{B}_y) \dif{y}, \\ r_j(k,\hat x,\omega) & := \sum_{s \geq j} G_s(k,\hat x,\omega), \end{align*} for $j = 0,1,\cdots$. To prove \eqref{eq:hotFjF1-SchroEqu2018} for the case where $j = 0$, we first show that \begin{equation} \label{eq:hotF0Fj-SchroEqu2018} \mathbb{E} \big( \overline{G_0(k_1,\hat x_1,\omega)} \cdot G_j(k_2,\hat x_2,\omega) \big) = \int_D e^{-ik_2 \hat x_2 \cdot z} (V \mathcal{R}_{k_2})^j \big( e^{ik_1 \hat x_1 \cdot (\cdot)} \sigma^2 \big) \dif{z}, \quad j \geq 1. \end{equation} This can be seen from the following computation: \begin{align} & \ \mathbb{E} \big( \overline{G_0(k_1,\hat x_1,\omega)} \cdot G_j(k_2,\hat x_2,\omega) \big) \nonumber\\ = & \ \mathbb{E} \big( \int_D e^{ik_1 \hat x_1 \cdot y} \sigma(y) \dif{B_y} \cdot \int_D \big[ e^{-ik_2 \hat x_2 \cdot z} (V \mathcal{R}_{k_2})^{j-1} ( V(\cdot) \int_{D_s} \Phi(\cdot,s) \sigma(s) \dif{B_s} ) \big] \dif{z} \big) \nonumber\\ = & \int_D e^{-ik_2 \hat x_2 \cdot z} (V \mathcal{R}_{k_2})^{j-1} \Big\{ V(\cdot) \,\mathbb{E} \big[ \int_{D_y} e^{ik_1 \hat x_1 \cdot y} \sigma(y) \dif{B_y} \cdot \int_{D_s} \Phi(\cdot,s) \sigma(s) \dif{B_s} \big] \Big\} \dif{z} \nonumber\\ = & \int_D e^{-ik_2 \hat x_2 \cdot z} (V \mathcal{R}_{k_2})^{j-1} \big( V(\cdot) \mathcal{R}_{k_2}(e^{ik_1 \hat x_1 \cdot (\cdot)} \sigma^2) \big) \dif{z} \nonumber\\ = & \int_D e^{-ik_2 \hat x_2 \cdot z} (V \mathcal{R}_{k_2})^j ( e^{ik_1 \hat x_1 \cdot (\cdot)} \sigma^2 ) \dif{z}. \label{eq:hotF0FjInter-SchroEqu2018} \end{align} From \eqref{eq:hotF0FjInter-SchroEqu2018}, equality \eqref{eq:hotF0Fj-SchroEqu2018} is proved.
Using \eqref{eq:hotF0Fj-SchroEqu2018} and Lemma \ref{lemma:RkVBounded-SchroEqu2018}, we have \begin{align*} & \ \big| \mathbb{E} \big( \overline{F_0(k_1,\hat x_1,\omega)} \cdot F_1(k_2,\hat x_2,\omega) \big) \big| \nonumber\\ \leq & \ \sum_{j \geq 1} \big| \mathbb{E} \big( G_0(k_1,\hat x_1,\omega) \cdot \overline{G_j(k_2,\hat x_2,\omega)} \big) \big| \nonumber\\ = & \ \sum_{j \geq 1} \Big| \int_D e^{-ik_2 \hat x_2 \cdot z} (V \mathcal{R}_{k_2})^j \big( e^{ik_1 \hat x_1 \cdot (\cdot)} \sigma^2 \big) \dif{z} \Big| \nonumber\\ \leq & \ |D|^{1/2} \cdot \sum_{j \geq 1} \nrm[L^2(D)]{ (V \mathcal{R}_{k_2})^j \big( e^{ik_1 \hat x_1 \cdot (\cdot)} \sigma^2 \big) } \nonumber\\ \leq & \ C |D|^{1/2} \cdot \sum_{j \geq 1} k_2^{-j} \nrm[L^2(D)]{e^{ik_1 \hat x_1 \cdot (\cdot)} \sigma^2} = \mathcal{O}(k_2^{-1}), \quad k \to +\infty. \end{align*} \sq{To prove \eqref{eq:hotFjF1-SchroEqu2018} for the case where $j = 1$, we split $\mathbb{E} (\overline{F_1} F_1)$ into four terms, \begin{equation} \label{eq:FGr-SchroEqu2018} \mathbb{E} (\overline{F_1} F_1) = \mathbb{E} (\overline{G_1} G_1) + \mathbb{E} (\overline{r_1} r_2) - \mathbb{E} (\overline{r_2} r_2) + \mathbb{E} (\overline{r_2} r_1). \end{equation} We estimate these four terms on the right-hand-side of \eqref{eq:FGr-SchroEqu2018} one by one.} First, we estimate \begin{align} & \ \big| \mathbb{E} \big( \overline{G_1(k_1,\hat x_1,\omega)} \cdot G_1(k_2,\hat x_2,\omega) \big) \big| \nonumber\\ = & \ \Big| \iint_{D_y \times D_z} e^{-ik_1 \hat x_1 \cdot y} e^{ik_2 \hat x_2 \cdot z} V(y) \overline V(z) \cdot \mathbb{E} \big[ \int_{D_s} \Phi(y,s) \sigma(s) \dif{B_s} \cdot \int_{D_t} \overline \Phi(z,t) \sigma(t) \dif{B_t} \big] \dif{y} \dif{z} \Big| \nonumber\\ = & \ \Big| \iint_{D_y \times D_z} e^{-ik_1 \hat x_1 \cdot y} e^{ik_2 \hat x_2 \cdot z} V(y) \overline V(z) \cdot \big[ \int_{D_s} \Phi(y,s) \sigma(s) \overline \Phi(z,s) \sigma(s) \dif{s} \big] \dif{y} \dif{z} \Big| \nonumber\\ = & \ \Big| \int_{D} \sigma^2(s) \cdot \mathcal{R}_{k_1} V( e^{-ik_1 \hat x_1 \cdot (\cdot)} )(s) \cdot \overline{ \mathcal{R}_{k_2} V ( e^{-ik_2 \hat x_2 \cdot (\cdot)} )(s) } \dif{s} \Big| \nonumber\\ \leq & \ C k_1^{-1} k_2^{-1} \nrm[L^\infty(D)]{\sigma}^2 \quad\big( \text{Lemma \ref{lemma:RkVBounded-SchroEqu2018}} \big) \nonumber\\ = & \ \mathcal{O}(k_1^{-1} k_2^{-1}), \quad k \to +\infty. 
\label{eq:hotG1G1-SchroEqu2018} \end{align} Then we estimate \begin{align} & \ \big| \mathbb{E} \big( \overline{r_1(k_1,\hat x_1,\omega)} \cdot r_2(k_2,\hat x_2,\omega) \big) \big| \leq \mathbb{E} \Big( \sum_{j \geq 1} \big| G_j(k_1,\hat x_1,\omega) \big| \times \sum_{\ell \geq 2} \big| G_\ell(k_2,\hat x_2,\omega) \big| \Big) \nonumber\\ = & \ \mathbb{E} \Big( \sum_{j \geq 1} \big| \int_D e^{-ik_1 \hat x_1 \cdot y} (V \mathcal{R}_{k_1})^j (\sigma \dot{B}_y) \dif{y} \big| \times \sum_{\ell \geq 2} \big| \int_D e^{-ik_2 \hat x_2 \cdot z} (V \mathcal{R}_{k_2})^\ell (\sigma \dot{B}_z) \dif{z} \big| \Big) \nonumber\\ = & \ \nrm[L^\infty(D)]{V}^2 |D| \cdot \mathbb{E} \Big( \sum_{j \geq 0} \nrm[L^2(D)]{(\mathcal{R}_{k_1} V)^j [\mathcal{R}_{k_1}(\sigma \dot{B})]} \times \sum_{\ell \geq 1} \nrm[L^2(D)]{(\mathcal{R}_{k_2} V)^\ell [\mathcal{R}_{k_2}(\sigma \dot{B})]} \Big) \nonumber\\ \leq & \ C \nrm[L^\infty(D)]{V}^2 |D| \cdot \mathbb{E} \Big( \sum_{j \geq 0} \big( k_1^{-j} \nrm[L^2(D)]{\mathcal{R}_{k_1}(\sigma \dot{B})} \big) \times \sum_{\ell \geq 1} \big( k_2^{-\ell} \nrm[L^2(D)]{\mathcal{R}_{k_2}(\sigma \dot{B})} \big) \Big) \nonumber\\ \leq & \ \nrm[L^\infty(D)]{V}^2 |D| \cdot \frac {k_1} {k_1-1} \cdot \frac 1 {k_2-1} \cdot \frac 1 2 \mathbb{E} \big( \nrm[L^2(D)]{\mathcal{R}_{k_1}(\sigma \dot{B})}^2 + \nrm[L^2(D)]{\mathcal{R}_{k_2}(\sigma \dot{B})}^2 \big). \label{eq:hotr1r2Inter1-SchroEqu2018} \end{align} Utilizing \eqref{eq:Rksigma2Bounded-SchroEqu2018}, we obtain \begin{equation} \mathbb{E} \big( \nrm[L^2(D)] {{\mathcal{R}_{k}}(\sigma \dot{B})}^2 \big)
\leq C \mathbb{E} \big( \nrm[L_{-1/2-\epsilon}^2] {{\mathcal{R}_{k}}(\sigma \dot{B})}^2 \big) \leq C_D < +\infty. \label{eq:hotr1r2Inter2-SchroEqu2018} \end{equation} From (\ref{eq:hotr1r2Inter1-SchroEqu2018})-(\ref{eq:hotr1r2Inter2-SchroEqu2018}) we arrive at \begin{equation} \label{eq:hotr1r2-SchroEqu2018} \big| \mathbb{E} \big( \overline{r_1(k_1,\hat x_1,\omega)} \cdot r_2(k_2,\hat x_2,\omega)\big) \big| \leq \mathcal{O}(k_2^{-1}), \quad k \to +\infty. \end{equation} Mimicking (\ref{eq:hotr1r2Inter1-SchroEqu2018})-(\ref{eq:hotr1r2Inter2-SchroEqu2018}), one can obtain \begin{equation} \label{eq:hotr2r1-SchroEqu2018} \big| \mathbb{E} \big( \overline{r_2(k_1,\hat x_1,\omega)} \cdot r_1(k_2,\hat x_2,\omega) \big) \big| \leq \mathcal{O}(k_1^{-1}), \quad k \to +\infty.
\end{equation} By modifying $\sum_{j \geq 0} k_1^{-j}$ to $\sum_{j \geq 1} k_1^{-j}$ in (\ref{eq:hotr1r2Inter1-SchroEqu2018}), one can conclude \begin{equation} \label{eq:hotr2r2-SchroEqu2018} \big| \mathbb{E} \big( \overline{r_2(k_1,\hat x_1,\omega)} \cdot r_2(k_2,\hat x_2,\omega) \big) \big| \leq \mathcal{O}(k_1^{-1}k_2^{-1}), \quad k \to +\infty. \end{equation} Combining \eqref{eq:FGr-SchroEqu2018}-\eqref{eq:hotG1G1-SchroEqu2018} and \eqref{eq:hotr1r2-SchroEqu2018}-\eqref{eq:hotr2r2-SchroEqu2018}, we arrive at \eqref{eq:hotFjF1-SchroEqu2018} for the case where $j = 1$. The proof is complete. \end{proof}
Lemma \ref{lemma:HOTErgo-SchroEqu2018} is the ergodic version of Lemma \ref{lemma:HOT-SchroEqu2018}. \begin{lem} \label{lemma:HOTErgo-SchroEqu2018} Write \begin{align*} X_{p,q}(K,\tau,\hat x,\omega) & = \frac 1 K \int_K^{2K} \overline{F_p(k,\hat x,\omega)} \cdot F_q(k+\tau,\hat x,\omega) \dif{k}, \ \ \text{for} \ \ (p,q) \in \{ (0,1), (1,0), (1,1) \}. \end{align*} Then for any $\hat x \in \mathbb{S}^2$ and any $\tau \geq 0$, we have the following estimates as $K \to +\infty$, \begin{align} \big| \mathbb{E} (X_{p,q}(K,\tau,\hat x,\omega)) \big| & = \mathcal{O}(K^{-1}), \ \mathbb{E} (|X_{p,q}(K,\tau,\hat x,\omega)|^2) = \mathcal{O}(K^{-5/4}), \label{eq:hotF0F1Ergo-SchroEqu2018} \\ \big| \mathbb{E} (X_{1,1}(K,\tau,\hat x,\omega)) \big| & = \mathcal{O}(K^{-1}), \ \mathbb{E} (|X_{1,1}(K,\tau,\hat x,\omega)|^2) = \mathcal{O}(K^{-2}), \label{eq:hotF1F1Ergo-SchroEqu2018} \end{align} for $(p,q) \in \{ (0,1), (1,0) \}$. Let $\{K_j\} \in P(4/5+\gamma)$. Then for any $\tau \geq 0$, we have \begin{equation} \label{eq:HOTErgoToZero-SchroEqu2018} \lim_{j \to +\infty} X_{p,q}(K_j,\tau,\hat x,\omega) = 0 \quad \textrm{~a.s.~}, \end{equation} for every $(p,q) \in \{ (0,1), (1,0), (1,1) \}$. \end{lem} We may denote $X_{p,q}(K,\tau,\hat x,\omega)$ as $X_{p,q}$ for short if it is clear in the context. \begin{proof}[Proof of Lemma \ref{lemma:HOTErgo-SchroEqu2018}] According to Lemma \ref{lemma:HOT-SchroEqu2018}, we have \begin{align} \mathbb{E} \big( X_{0,1} \big) & = \frac 1 K \int_K^{2K} \mathbb{E} \big( \overline{F_0(k,\hat x,\omega)} \cdot F_1(k+\tau,\hat x,\omega) \big) \dif{k} \nonumber\\ & = \mathcal{O}(K^{-1}), \quad K \to +\infty. \label{eq:hotF0F1Ergo1-SchroEqu2018} \end{align} By (\ref{eq:I0-SchroEqu2018}), Isserlis' Theorem and Lemma \ref{lemma:LeadingTermTechnical-SchroEqu2018}, we compute the second moment of $X_{0,1}$ as \begin{align} & \ \mathbb{E} \big( | X_{0,1} |^2 \big) \nonumber\\ = & \ \frac 1 {K^2} \int_K^{2K} \int_K^{2K} \mathbb{E} \big( F_0(k_1,\hat x,\omega) \overline{F_1(k_1+\tau,\hat x,\omega)} \big) \cdot \mathbb{E} \big( \overline{F_0(k_2,\hat x,\omega)} F_1(k_2+\tau,\hat x,\omega) \big) \nonumber\\ & \ + \mathbb{E} \big( F_0(k_1,\hat x,\omega) \overline{F_0(k_2,\hat x,\omega)} \big) \cdot \mathbb{E} \big( \overline{F_1(k_1+\tau,\hat x,\omega)} F_1(k_2+\tau,\hat x,\omega) \big) \nonumber\\ & \ + \mathbb{E} \big( F_0(k_1,\hat x,\omega) F_1(k_2+\tau,\hat x,\omega) \big) \cdot \mathbb{E} \big( \overline{F_1(k_1+\tau,\hat x,\omega)} \, \overline{F_0(k_2,\hat x,\omega)} \big) \dif{k_1} \dif{k_2} \nonumber\\ = & \ \frac 1 {K^2} \int_K^{2K} \int_K^{2K} \mathcal{O}(K^{-2}) + (2\pi)^{3/2} \widehat{\sigma^2} ((k_1-k_2) \hat x) \cdot \mathcal{O}(K^{-1}) + \mathcal{O}(K^{-2}) \dif{k_1} \dif{k_2} \nonumber\\ = & \ \mathcal{O}(K^{-1/4}) \cdot \mathcal{O}(K^{-1}) + \mathcal{O}(K^{-2}) \quad(\text{H\"older ineq. and Lemma } \ref{lemma:LeadingTermTechnical-SchroEqu2018}) \nonumber\\ = & \ \mathcal{O}(K^{-5/4}), \quad K \to +\infty. \label{eq:hotF0F1Ergo2-SchroEqu2018} \end{align} From \eqref{eq:hotF0F1Ergo1-SchroEqu2018}-\eqref{eq:hotF0F1Ergo2-SchroEqu2018} we obtain \eqref{eq:hotF0F1Ergo-SchroEqu2018} for the case where $(p,q) = (0,1)$. Using similar arguments, formula \eqref{eq:hotF0F1Ergo-SchroEqu2018} for $(p,q) = (1,0)$ can be proved, and we skip the details.
By Chebyshev's inequality and (\ref{eq:hotF0F1Ergo2-SchroEqu2018}), for any $\epsilon > 0$, we have \begin{align} \qquad & \ P \big( \bigcup_{j \geq K_0} \{ |X_{0,1}(K_j, \tau, \hat x,\omega) - 0| \geq \epsilon \} \big) \leq \frac C {\epsilon^2} \sum_{j \geq K_0} K_j^{-5/4} \leq \frac C {\epsilon^2} \sum_{j \geq K_0} j^{-1-5\gamma/4} \nonumber\\ \leq & \ \frac C {\epsilon^2} \int_{K_0}^{+\infty} (t-1)^{-1-5\gamma/4} \dif{t} = \frac C {\epsilon^2 \gamma} (K_0-1)^{-5\gamma/4} \to 0, \quad K_0 \to +\infty. \label{eq:X01Ergo-SchroEqu2018} \end{align} According to Lemma \ref{lem:asconverg-SchroEqu2018}, inequality \eqref{eq:X01Ergo-SchroEqu2018} implies \eqref{eq:HOTErgoToZero-SchroEqu2018} for the case where $(p,q) = (0,1)$. Similarly, formula \eqref{eq:HOTErgoToZero-SchroEqu2018} can be proved for the case where $(p,q) = (1,0)$. We now prove \eqref{eq:hotF1F1Ergo-SchroEqu2018}. We have \begin{align} \mathbb{E} \big( X_{1,1} \big) & = \frac 1 K \int_K^{2K} \mathbb{E} \big( \overline{F_1(k,\hat x,\omega)} \cdot F_1(k+\tau,\hat x,\omega) \big) \dif{k} = \mathcal{O}(K^{-1}). \label{eq:hotF1F1Ergo1-SchroEqu2018} \end{align} Similar to \eqref{eq:hotF0F1Ergo2-SchroEqu2018}, we compute the second moment of $X_{1,1}$ as \begin{align} & \ \mathbb{E} \big( | X_{1,1} |^2 \big) \nonumber\\ = & \ \mathbb{E} \big( \frac 1 K \int_K^{2K} F_1(k_1,\hat x,\omega) \cdot \overline{F_1(k_1+\tau,\hat x,\omega)} \dif{k_1} \cdot \frac 1 K \int_K^{2K} \overline{ F_1(k_2,\hat x,\omega) } \cdot F_1(k_2+\tau,\hat x,\omega) \dif{k_2} \big) \nonumber\\ = & \ \frac 1 {K^2} \int_K^{2K} \int_K^{2K} \mathcal{O}(K^{-1}) \cdot \mathcal{O}(K^{-1}) \dif{k_1} \dif{k_2} \quad (\text{Lemma } \ref{lemma:HOT-SchroEqu2018}) \nonumber\\ = & \ \mathcal{O}(K^{-2}), \quad K \to +\infty. \label{eq:hotF1F1Ergo2-SchroEqu2018} \end{align} Formulae \eqref{eq:hotF1F1Ergo1-SchroEqu2018} and \eqref{eq:hotF1F1Ergo2-SchroEqu2018} give \eqref{eq:hotF1F1Ergo-SchroEqu2018}. By Chebyshev's inequality and \eqref{eq:hotF1F1Ergo2-SchroEqu2018}, for any $\epsilon > 0$, we have \begin{align} & P \big( \bigcup_{j \geq K_0} \{ |X_{1,1} - 0| \geq \epsilon \} \big) \leq \frac C {\epsilon^2} \sum_{j \geq K_0} K_j^{-2} \leq \frac C {\epsilon^2} \sum_{j \geq K_0} j^{-8/5-2\gamma} \nonumber\\ \leq & \frac C {\epsilon^2} \int_{K_0}^{+\infty} (t-1)^{-8/5-2\gamma} \dif{t} = \frac {C (K_0-1)^{-3/5-2\gamma}} {\epsilon^2 (3+10\gamma)} \to 0, \quad K_0 \to +\infty. \label{eq:X11Ergo-SchroEqu2018} \end{align} Lemma \ref{lem:asconverg-SchroEqu2018} together with \eqref{eq:X11Ergo-SchroEqu2018} implies \eqref{eq:HOTErgoToZero-SchroEqu2018} for the case $(p,q) = (1,1)$. The proof is thus complete. \end{proof}
\section{The recovery of the variance function} \label{sec:RecVar-SchroEqu2018} In this section we focus on the recovery of the variance function. We employ only a single passive scattering measurement. Namely, no incident plane wave is sent and the random sample $\omega$ is fixed. Throughout this section, $\alpha$ is set to 0. The data set $\mathcal M_1$ is utilized to achieve the unique recovery result. We present the main results of recovering the variance function in Section \ref{subsec:MainSteps-SchroEqu2018}, and put the corresponding proofs in Section \ref{subsec:ProofsToMainSteps-SchroEqu2018}. \subsection{Main unique recovery results} \label{subsec:MainSteps-SchroEqu2018} To make the exposition clearer, we use three lemmas, i.e., Lemmas \ref{lem:sigmaHatRec-SchroEqu2018}, \ref{lem:sigmaHatRecErgo-SchroEqu2018} and \ref{lem:sigmaHatRecSingle-SchroEqu2018}, to illustrate our scheme for recovering the variance function. The first main result is as follows. \begin{lem} \label{lem:sigmaHatRec-SchroEqu2018} We have the following asymptotic identity, \begin{equation} \label{eq:sigmaHatRec-SchroEqu2018} 4\sqrt{2\pi} \lim_{k \to +\infty} \mathbb{E} \Big( \big[ \overline{u^\infty(\hat x, k, \omega)} - \overline{\mathbb{E} u^\infty(\hat x,k)}\, \big] \cdot \big[ u^\infty(\hat x, k+\tau, \omega) - \mathbb{E} u^\infty(\hat x,k+\tau) \big] \Big) = \widehat{\sigma^2}(\tau \hat x), \end{equation} where $\tau \geq 0,~ \hat x \in \mathbb{S}^2$. \end{lem} Lemma \ref{lem:sigmaHatRec-SchroEqu2018} clearly yields a recovery formula for the variance function. However, it requires many realizations. The result in Lemma \ref{lem:sigmaHatRec-SchroEqu2018} can be improved by using the ergodicity. See, e.g., \cite{caro2016inverse, Lassas2008, Helin2018}. \begin{lem} \label{lem:sigmaHatRecErgo-SchroEqu2018} Assume $\{K_j\} \in P(2+\gamma)$. Then $\exists\, \Omega_0 \subset \Omega \colon \mathbb{P}(\Omega_0) = 0$, $\Omega_0$ depending only on $\{K_j\}_{j \in \mathbb{N}^+}$, such that for any $\omega \in \Omega \backslash \Omega_0$, there exists $S_\omega \subset \R^3 \colon m(S_\omega) = 0$, such that for all $x \in \R^3 \backslash S_\omega$, \begin{align} & 4\sqrt{2\pi} \lim_{j \to +\infty} \frac 1 {K_j} \int_{K_j}^{2K_j} \big[ \overline{u^\infty(\hat x,k,\omega)} - \overline{\mathbb{E} u^\infty(\hat x,k)}\, \big] \cdot \big[ u^\infty(\hat x,k+\tau,\omega) - \mathbb{E} u^\infty(\hat x,k+\tau) \big] \dif{k} \nonumber\\ & = \widehat{\sigma^2} (x), \label{eq:SecondOrderErgo-SchroEqu2018} \end{align} where $\tau = |x|$ and $\hat x := x / |x|$. \end{lem} \sq{The recovery formula \eqref{eq:SecondOrderErgo-SchroEqu2018} holds for any $\hat x \in \mathbb S^2$ when $x = 0$.} The recovery formula presented in Lemma \ref{lem:sigmaHatRecErgo-SchroEqu2018} still involves every realization of the random sample $\omega$. To recover the variance function by only one realization, the term $\mathbb{E} u^\infty(\hat x,k)$ should be further relaxed in Lemma \ref{lem:sigmaHatRecErgo-SchroEqu2018}, and this is achieved by Lemma \ref{lem:sigmaHatRecSingle-SchroEqu2018}. \begin{lem} \label{lem:sigmaHatRecSingle-SchroEqu2018} Under the same condition as in Lemma \ref{lem:sigmaHatRecErgo-SchroEqu2018}, we have \begin{equation} \label{eq:sigmaHatRecSingle-SchroEqu2018} 4\sqrt{2\pi} \lim_{j \to +\infty} \frac 1 {K_j} \int_{K_j}^{2K_j} \overline{u^\infty(\hat x,k,\omega)} \cdot u^\infty(\hat x,k+\tau,\omega) \dif{k} = \widehat{\sigma^2} (x), \quad \textrm{~a.s.~}.
\end{equation} \end{lem} \begin{rem} In Lemma \ref{lem:sigmaHatRecSingle-SchroEqu2018}, it should be noted that the left-hand side of \eqref{eq:sigmaHatRecSingle-SchroEqu2018} contains the random sample $\omega$, while the right-hand side does not. This means that the limit in \eqref{eq:sigmaHatRecSingle-SchroEqu2018} is statistically stable. \end{rem} Now Theorem \ref{thm:Unisigma-SchroEqu2018} becomes a direct consequence of Lemma \ref{lem:sigmaHatRecSingle-SchroEqu2018}. \begin{proof}[Proof of Theorem \ref{thm:Unisigma-SchroEqu2018}] Lemma \ref{lem:sigmaHatRecSingle-SchroEqu2018} provides a recovery formula for the variance function $\sigma^2$ by the data set $\mathcal M_1$. \end{proof} \subsection{Proofs of the main results} \label{subsec:ProofsToMainSteps-SchroEqu2018} In this subsection, we present the proofs of Lemmas \ref{lem:sigmaHatRec-SchroEqu2018}, \ref{lem:sigmaHatRecErgo-SchroEqu2018} and \ref{lem:sigmaHatRecSingle-SchroEqu2018}. \begin{proof}[Proof of Lemma \ref{lem:sigmaHatRec-SchroEqu2018}] Write $u_1^\infty(\hat x,k,\omega) = u^\infty(\hat x,k,\omega) - \mathbb{E} u^\infty(\hat x,k)$ as in \eqref{eq:u1-SchroEqu2018}. Therefore $4\pi u_1^\infty(\hat x,k,\omega) = -\sum_{j=0}^{+\infty} \int_D e^{-ik \hat x \cdot y} (V {\mathcal{R}_{k}})^j (\sigma \dot{B}_y)\dif{y}$. Recall the definition of $F_j(k,\hat x,\omega)$ $(j = 0,1)$ in \eqref{eq:Fjkx-SchroEqu2018}. Let $k_1, k_2 > k > k^*$. One can compute \begin{align} 16\pi^2 \mathbb{E} \big( \overline{u_1^\infty(\hat x,k_1,\omega)} u_1^\infty(\hat x,k_2,\omega) \big) & = \sum_{j,\ell = 0,1} \mathbb{E} \big( \overline{F_j(k_1,\hat x,\omega)} F_\ell(k_2,\hat x,\omega) \big) \nonumber\\ & =: I_0 + I_1 + I_2 + I_3. \label{eq:Thm1uI-SchroEqu2018} \end{align} From Lemma \ref{lemma:HOT-SchroEqu2018}, the terms $I_1$, $I_2$ and $I_3$ are all of order $k^{-1}$; hence \begin{equation} \label{eq:u1Infty-SchroEqu2018} 16\pi^2 \mathbb{E} \big( \overline{u_1^\infty(\hat x,k_1,\omega)} u_1^\infty(\hat x,k_2,\omega) \big) = I_0 + \mathcal{O}(k^{-1}), \quad k \to +\infty. \end{equation} By \eqref{eq:I0-SchroEqu2018}, \eqref{eq:Thm1uI-SchroEqu2018} and \eqref{eq:u1Infty-SchroEqu2018}, we have $$16\pi^2 \lim_{k \to +\infty} \mathbb{E} \big( \overline{u_1^\infty (\hat x,k_1,\omega)} u_1^\infty (\hat x,k_2,\omega) \big) = (2\pi)^{3/2} \,\widehat{\sigma^2}((k_2 - k_1) \hat x),$$ which implies \eqref{eq:sigmaHatRec-SchroEqu2018}. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:sigmaHatRecErgo-SchroEqu2018}] Our proof is divided into two steps. In the first step we establish a basic convergence result, namely \eqref{eq:SecondOrderErgo2-SchroEqu2018}; in the second step the logical order of $y$ and $\omega$ in \eqref{eq:SecondOrderErgo2-SchroEqu2018} is exchanged. \noindent \textbf{Step 1}: a basic convergence result. We denote by $\mathcal{E}_k$ the averaging operation w.r.t.\ $k$: ${{\mathcal{E}_k f} = \frac 1 K \int_K^{2K} f(k) \dif{k}}$. Following the notation conventions in the proof of Lemma \ref{lem:sigmaHatRec-SchroEqu2018}, we have \begin{align} 16\pi^2 \mathcal{E}_k \big( \overline{u_1^\infty(\hat x,k,\omega)} u_1^\infty(\hat x,k+\tau,\omega) \big) & = \sum_{j,\ell = 0,1} \mathcal{E}_k \big( \overline{F_j(k,\hat x,\omega)} F_\ell(k+\tau,\hat x,\omega) \big) \nonumber\\ & =: X_{0,0} + X_{0,1}+ X_{1,0} + X_{1,1}. \label{eq:Thm2uX-SchroEqu2018} \end{align} Recall that $\{K_j\} \in P(2+\gamma)$. Then, for all $\tau \geq 0$ and all $\hat x \in \mathbb{S}^2$, Lemma \ref{lemma:LeadingTermErgo-SchroEqu2018} implies that there exists $\Omega_{\tau,\hat x}^{0,0} \subset \Omega$ with $\mathbb{P}(\Omega_{\tau,\hat x}^{0,0}) = 0$, depending on $\tau$ and $\hat x$, such that \begin{equation} \label{eq:Thm2X00-SchroEqu2018} \lim_{j \to +\infty} X_{0,0}(K_j,\tau,\hat x,\omega) = (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x), \quad \forall \omega \in \Omega \backslash \Omega_{\tau,\hat x}^{0,0}.
\end{equation} $\{K_j\} \in P(2+\gamma)$ implies $\{K_j\} \in P(5/4+\gamma)$, so Lemma \ref{lemma:HOTErgo-SchroEqu2018} implies the existence of sets $\Omega_{\tau,\hat x}^{p,q}$ $\big( (p,q) \in \{ (0,1),\, (1,0),\, (1,1) \} \big)$ of zero probability measure such that for all $\tau \geq 0$ and all $\hat x \in \mathbb{S}^2$, \begin{equation} \label{eq:Thm2Xpq-SchroEqu2018} \lim_{j \to +\infty} X_{p,q}(K_j,\tau,\hat x,\omega) = 0, \quad \forall \omega \in \Omega \backslash \Omega_{\tau,\hat x}^{p,q}, \end{equation} for all $(p,q) \in \{ (0,1),\, (1,0),\, (1,1) \}$. Write $\Omega_{\tau,\hat x} = \bigcup_{p,q = 0,1} \Omega_{\tau,\hat x}^{p,q}$\,; then $\mathbb{P} (\Omega_{\tau,\hat x}) = 0$. From Lemmas \ref{lemma:LeadingTermErgo-SchroEqu2018} and \ref{lemma:HOTErgo-SchroEqu2018} we note that $\Omega_{\tau,\hat x}^{p,q}$ also depends on $\{K_j\}$, so does $\Omega_{\tau,\hat x}$, but we omit this dependence in the notation. Write \[ Z(\tau\hat x,\omega) := \lim_{j \to +\infty} \frac {16\pi^2} {K_j} \int_{K_j}^{2K_j} \overline{u_1^\infty(\hat x,k,\omega)} u_1^\infty(\hat x,k+\tau,\omega) \dif{k} - (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x) \] for short. By \eqref{eq:Thm2uX-SchroEqu2018}--\eqref{eq:Thm2Xpq-SchroEqu2018}, we conclude that \begin{equation} \label{eq:SecondOrderErgo2-SchroEqu2018} {\,\forall\,} y \in \R^3, {\,\exists\,} \Omega_y \subset \Omega \colon \mathbb P (\Omega_y) = 0, \textrm{~s.t.~} \forall\, \omega \in \Omega \backslash \Omega_y,\, Z(y,\omega) = 0. \end{equation} \noindent \textbf{Step 2}: exchange the logical order. To conclude \eqref{eq:SecondOrderErgo-SchroEqu2018} from \eqref{eq:SecondOrderErgo2-SchroEqu2018}, we should exchange the logical order of $y$ and $\omega$. To achieve this, we utilize Fubini's Theorem. Denote the usual Lebesgue measure on $\R^3$ by $\mathbb L$ and the product measure $\mathbb L \times \mathbb P$ by $\mu$, and construct the product measure space $\mathbb M := (\R^3 \times \Omega, \mathcal G, \mu)$ in the canonical way, where $\mathcal G$ is the corresponding complete $\sigma$-algebra. Write \[ \mathcal{A} := \{ (y,\omega) \in \R^3 \times \Omega \,;\, Z(y, \omega) \neq 0 \}; \] then $\mathcal{A}$ is a subset of $\mathbb M$. Let $\chi_\mathcal{A}$ be the characteristic function of $\mathcal{A}$ in $\mathbb M$. By \eqref{eq:SecondOrderErgo2-SchroEqu2018} we obtain \begin{equation} \label{eq:FubiniEq0-SchroEqu2018} \int_{\R^3} \big( \int_\Omega \chi_{\mathcal A}(y,\omega) \dif{\mathbb P(\omega)} \big) \dif{\mathbb L(y)} = 0. \end{equation} By \eqref{eq:FubiniEq0-SchroEqu2018} and [Corollary 7 in Section 20.1, \citen{royden2000real}] we obtain \begin{equation} \label{eq:FubiniEq1-SchroEqu2018} \int_{\mathbb M} \chi_{\mathcal A}(y,\omega) \dif{\mu} = \int_\Omega \big( \int_{\R^3} \chi_{\mathcal A}(y,\omega) \dif{\mathbb L(y)} \big) \dif{\mathbb P(\omega)} = 0. \end{equation} Because $\chi_{\mathcal A}(y,\omega)$ is non-negative, \eqref{eq:FubiniEq1-SchroEqu2018} implies \begin{equation} \label{eq:FubiniEq2-SchroEqu2018} {\,\exists\,} \Omega_0 \colon \mathbb P (\Omega_0) = 0, \textrm{~s.t.~} \forall\, \omega \in \Omega \backslash \Omega_0,\, \int_{\R^3} \chi_{\mathcal A}(y,\omega) \dif{\mathbb L(y)} = 0. \end{equation} Formula \eqref{eq:FubiniEq2-SchroEqu2018} further implies that, for every $\omega \in \Omega \backslash \Omega_0$, \begin{equation} \label{eq:FubiniEq3-SchroEqu2018} {\,\exists\,} S_\omega \subset \R^3 \colon \mathbb L (S_\omega) = 0, \textrm{~s.t.~} \forall\, y \in \R^3 \backslash S_\omega,\, Z(y,\omega) = 0.
\end{equation} From (\ref{eq:FubiniEq3-SchroEqu2018}) we arrive at (\ref{eq:SecondOrderErgo-SchroEqu2018}). \end{proof}
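Although this section is purely theoretical, the averaging mechanism behind Lemma \ref{lem:sigmaHatRecErgo-SchroEqu2018} (a frequency average over a single window $[K_j, 2K_j]$ replacing the ensemble average) can be illustrated numerically. The following toy sketch is our own illustration and makes strong simplifying assumptions: the process $F$ below is a generic stationary Gaussian proxy, a random trigonometric sum, not the far-field pattern of the random Schr\"odinger equation, and the agreement is only up to the law-of-large-numbers error in the finitely many random amplitudes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy stationary proxy for the centered far-field pattern:
# F(k) = sum_m a_m exp(i k t_m) with independent complex Gaussian
# amplitudes a_m, so that the ensemble correlation
# E[conj(F(k)) F(k+tau)] = sum_m E|a_m|^2 exp(i tau t_m)
# depends on tau only.
n_modes = 500
t = rng.uniform(-1.0, 1.0, n_modes)            # fixed "frequencies"
a = (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)) \
    / np.sqrt(2 * n_modes)                     # E|a_m|^2 = 1/n_modes

def F(k):
    # Evaluate F on an array of wavenumbers for this one realization.
    return np.exp(1j * np.outer(k, t)) @ a

tau = 0.3
ensemble = np.mean(np.exp(1j * tau * t))       # exact ensemble average

for K in (1e2, 1e3, 1e4):                      # growing windows [K, 2K]
    k = np.linspace(K, 2 * K, 8001)
    ergodic = np.mean(np.conj(F(k)) * F(k + tau))
    print(f"K = {K:8.0f}  frequency avg = {complex(ergodic):.4f}  "
          f"ensemble avg = {complex(ensemble):.4f}")
\end{verbatim}
For each fixed realization, the frequency average over $[K, 2K]$ approaches the ensemble value as $K$ grows; this exchange of averages is what Lemmas \ref{lemma:LeadingTermErgo-SchroEqu2018} and \ref{lemma:HOTErgo-SchroEqu2018} justify rigorously for the scattering problem.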
\begin{proof}[Proof of Lemma \ref{lem:sigmaHatRecSingle-SchroEqu2018}] The symbol $\mathcal{E}_k$ is defined the same as in the proof of Lemma \ref{lem:sigmaHatRecErgo-SchroEqu2018}. We have \begin{align} & 16\pi^2 \mathcal{E}_k \big( \overline{u^\infty(\hat x,k,\omega)} u^\infty(\hat x,k+\tau,\omega) \big) \nonumber\\ = \ & 16\pi^2 \mathcal{E}_k \big( \overline{u_1^\infty(\hat x,k,\omega)} \cdot u_1^\infty(\hat x,k+\tau,\omega) \big) + 16\pi^2 \mathcal{E}_k \big( \overline{u_1^\infty(\hat x,k,\omega)} \cdot \mathbb E u^\infty(\hat x,k+\tau) \big) \nonumber\\ & + 16\pi^2 \mathcal{E}_k \big( \overline{\mathbb E u^\infty(\hat x,k)} \cdot u_1^\infty(\hat x,k+\tau,\omega) \big) + 16\pi^2 \mathcal{E}_k \big( \overline{\mathbb E u^\infty(\hat x,k)} \cdot \mathbb E u^\infty(\hat x,k+\tau) \big) \nonumber\\ =: & J_0 + J_1 + J_2 + J_3. \label{eq:J-SchroEqu2018} \end{align} From Lemma \ref{lem:sigmaHatRecErgo-SchroEqu2018} we obtain \begin{equation} \label{eq:J0-SchroEqu2018} \begin{split} & \lim_{j \to +\infty} J_0 = \lim_{j \to +\infty} \frac {16\pi^2} {K_j} \int_{K_j}^{2K_j} \overline{u_1^\infty(\hat x,k,\omega)} \cdot u_1^\infty(\hat x,k+\tau,\omega) \dif{k} = (2\pi)^{3/2} \widehat{\sigma^2} (\tau \hat x), \\ & \quad \tau \hat x \textrm{~a.e.~} \! \in \R^3, \quad \omega \textrm{~a.s.~} \! \in \Omega. \end{split} \end{equation} We now estimate $J_1$, \begin{align} |J_1|^2 & \simeq \big| \mathcal{E}_k \big( \overline{u_1^\infty(\hat x,k,\omega)} \cdot \mathbb E u^\infty(\hat x,k+\tau) \big) \big|^2 = \big| \frac 1 {K_j} \int_{K_j}^{2K_j} \overline{u_1^\infty(\hat x,k,\omega)} \cdot \mathbb E u^\infty(\hat x,k+\tau) \dif{k} \big|^2 \nonumber\\ & \leq \frac 1 {K_j} \int_{K_j}^{2K_j} |u^\infty(\hat x,k,\omega) - \mathbb{E} u^\infty(\hat x,k)|^2 \dif{k} \cdot \frac 1 {K_j} \int_{K_j}^{2K_j} |\mathbb E u^\infty(\hat x,k+\tau)|^2 \dif{k}. \label{eq:J1One-SchroEqu2018} \end{align} Combining \eqref{eq:J1One-SchroEqu2018} with Lemmas \ref{lemma:FarFieldGoToZero-SchroEqu2018} and \ref{lem:sigmaHatRecErgo-SchroEqu2018}, we have \begin{equation} \label{eq:J1-SchroEqu2018} |J_1|^2 \lesssim (\widehat{\sigma^2}(0) + o(1)) \cdot o(1) = o(1) \to 0, \quad j \to +\infty. \end{equation} The analysis of $J_2$ is similar to that of $J_1$, so we skip the details. Finally, by Lemma \ref{lemma:FarFieldGoToZero-SchroEqu2018}, $J_3$ can be estimated as \begin{align} |J_3|^2 & \simeq \big| \mathcal{E}_k \big( \overline{\mathbb E u^\infty(\hat x,k)} \cdot \mathbb E u^\infty(\hat x,k+\tau) \big) \big|^2 \nonumber\\ & \leq \frac 1 {K_j} \int_{K_j}^{2K_j} \sup_{\kappa \geq K_j} \big| \mathbb E u^\infty(\hat x,\kappa) \big|^2 \dif{k} \cdot \frac 1 {K_j} \int_{K_j}^{2K_j} \sup_{\kappa \geq K_j+\tau} \big| \mathbb E u^\infty(\hat x,\kappa) \big|^2 \dif{k} \nonumber\\ & = \sup_{\kappa \geq K_j} |\mathbb E u^\infty(\hat x,\kappa)|^2 \cdot \sup_{\kappa \geq K_j+\tau} |\mathbb E u^\infty(\hat x,\kappa)|^2 \to 0, \quad j \to +\infty. \label{eq:J3-SchroEqu2018} \end{align} Combining \eqref{eq:J-SchroEqu2018}, \eqref{eq:J0-SchroEqu2018}, \eqref{eq:J1-SchroEqu2018} and \eqref{eq:J3-SchroEqu2018}, we arrive at \eqref{eq:sigmaHatRecSingle-SchroEqu2018}. Our proof is done. \end{proof} \section{Uniqueness of the potential and the random source} \label{sec:RecPS-SchroEqu2018} In this section, we focus on the recovery of the potential term and the expectation of the random source. Due to the highly nonlinear relation between the total wave and the potential, active scattering measurements are utilized to recover the potential.
In the recovery of the potential, the random sample $\omega$ is fixed, so a single realization of the random term $\dot B_x$ suffices for the unique recovery. In contrast to the recovery of the potential, the uniqueness of the expectation requires all realizations of the random sample $\omega$. This is because the deterministic and random parts of the source are entangled, so a single realization of the random source cannot reveal the exact value of the expectation at each spatial point $x$. \subsection{Recovery of the potential} Now we are in a position to prove Theorem \ref{thm:UniPot1-SchroEqu2018}. We use incident plane waves, so $\alpha$ is set to 1 throughout this section. \begin{proof}[Proof of Theorem \ref{thm:UniPot1-SchroEqu2018}] The random sample $\omega$ is assumed to be fixed. Given two directions $d_1$ and $d_2$ of the incident plane waves, we denote the corresponding total waves by $u_{d_1}$ and $u_{d_2}$, respectively. Then, from \eqref{eq:1}, we have \begin{equation} \label{eq:uSubtract-SchroEqu2018} \begin{cases} (-\Delta - k^2)(u_{d_1} - u_{d_2}) = V (u_{d_1} - u_{d_2}) & \\ u_{d_1} - u_{d_2} = e^{ikd_1 \cdot x} - e^{ikd_2 \cdot x} + u_{d_1}^{sc}(x) - u_{d_2}^{sc}(x) & \\ u_{d_1}^{sc}(x) - u_{d_2}^{sc}(x): \text{ SRC} & \end{cases} \end{equation} From \eqref{eq:uSubtract-SchroEqu2018} we have the Lippmann-Schwinger equation, \begin{equation} \label{eq:uLippSchw-SchroEqu2018} \big( I - {\mathcal{R}_{k}} V \big) (u_{d_1} - u_{d_2}) = e^{ikd_1 \cdot x} - e^{ikd_2 \cdot x}. \end{equation} When $k > k^*$, equality \eqref{eq:uLippSchw-SchroEqu2018} gives \[ u_{d_1}^{sc} - u_{d_2}^{sc} = {\mathcal{R}_{k}} V (e^{ikd_1 \cdot x} - e^{ikd_2 \cdot x}) + \sum_{j=2}^\infty ({\mathcal{R}_{k}} V)^j (e^{ikd_1 \cdot x} - e^{ikd_2 \cdot x}). \] Therefore the difference between the far-field patterns is \begin{align} & u^{\infty}(\hat{x},k,d_1) - u^{\infty}(\hat{x},k,d_2) \nonumber\\ = \ & \int_{D} \frac{e^{-ik\hat{x} \cdot y}}{4\pi} V(y) (e^{ikd_1 \cdot y} - e^{ikd_2 \cdot y}) \dif{y} + \sum_{j=1}^\infty \int_{D} \frac{e^{-ik\hat{x} \cdot y}}{4\pi} V(y) ({\mathcal{R}_{k}} V)^j (e^{ikd_1 \cdot (\cdot)} - e^{ikd_2 \cdot (\cdot)}) \dif{y} \nonumber\\ =: & \sqrt{\frac{\pi}{2}} \widehat{V} \big( k(\hat{x} - d_1) \big) - \sqrt{\frac{\pi}{2}} \widehat{V} \big( k(\hat{x} - d_2) \big) + \sum_{j=1}^\infty H_j(k), \label{eq:uFarfied-SchroEqu2018} \end{align} where \begin{equation} \label{eq:Fjk-SchroEqu2018} H_j(k) := \int_{D} \frac{e^{-ik\hat{x} \cdot y}}{4\pi} V(y) ({\mathcal{R}_{k}} V)^j (e^{ikd_1 \cdot (\cdot)} - e^{ikd_2 \cdot (\cdot)}) \dif{y}, \quad j = 1,2, \cdots. \end{equation} For any $p \in \R^3$, when $p = 0$, we let $\hat x = (1,0,0),$ $d_1 = (1,0,0),$ $d_2 = (0,1,0)$; when $p \neq 0$, we can always find a $p^\perp \in \R^3$ which is perpendicular to $p$. Let \begin{equation*} e = p^\perp / \nrm{p^\perp} \quad\text{ and }\quad \left\{\begin{aligned} \hat x & = \sqrt{1 - \nrm{p}^2 / (4k^2)} \cdot e + p / (2k), \\ d_1 & = \sqrt{1 - \nrm{p}^2 / (4k^2)} \cdot e - p / (2k), \\ d_2 & = p/\nrm{p}. \end{aligned}\right. \end{equation*} Then, for $k > \nrm{p}/2$, we have \begin{equation} \label{eq:xhatd1Property-SchroEqu2018} \left\{\begin{aligned} & \hat x, d_1, d_2 \in \mathbb{S}^2, \\ & k(\hat{x} - d_1) = p, \\ & |k(\hat{x} - d_2)| \to \infty ~(k \to \infty). \end{aligned}\right. \end{equation} Note that the choices of these two unit vectors $\hat x$, $d_1$ depend on $k$.
For different values of $k$, we pick different directions $\hat x$, $d_1$ to guarantee \eqref{eq:xhatd1Property-SchroEqu2018}. Then, since $|k(\hat x - d_2)| \to \infty$ and hence $\widehat{V}(k(\hat x - d_2)) \to 0$ by the Riemann--Lebesgue lemma, \begin{equation} \label{eq:Vxd1d2-SchroEqu2018} \sqrt{\frac \pi 2}\widehat{V}(p) = \lim_{k \to +\infty} \big( \sqrt{\frac \pi 2} \widehat{V} (k(\hat{x} - d_1)) - \sqrt{\frac{\pi}{2}} \widehat{V} (k(\hat{x} - d_2)) \big). \end{equation} Combining \eqref{eq:uFarfied-SchroEqu2018}, \eqref{eq:Vxd1d2-SchroEqu2018} and Lemma \ref{lemma:FjkEstimated-SchroEqu2018}, we conclude \begin{equation} \label{eq:PotnFourier-SchroEqu2018} \widehat{V}(p) = \sqrt{\frac{2}{\pi}} \lim_{k \to +\infty} \big( u^{\infty}(\hat{x},k,d_1) - u^{\infty}(\hat{x},k,d_2) \big). \end{equation} Formula \eqref{eq:PotnFourier-SchroEqu2018} completes the proof. \end{proof}
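The algebra behind \eqref{eq:xhatd1Property-SchroEqu2018} is elementary but easy to get wrong. The following small numerical check is our own illustration (with an arbitrarily chosen $p$ and $p^\perp$); it verifies the three properties for several values of $k$:
\begin{verbatim}
import numpy as np

p = np.array([1.0, -2.0, 0.5])            # arbitrary nonzero p
p_perp = np.array([2.0, 1.0, 0.0])        # chosen so that p . p_perp = 0
assert abs(p @ p_perp) < 1e-12
e = p_perp / np.linalg.norm(p_perp)

for k in (10.0, 1e3, 1e5):                # any k > |p|/2
    r = np.sqrt(1.0 - (p @ p) / (4.0 * k**2))
    x_hat = r * e + p / (2.0 * k)
    d1 = r * e - p / (2.0 * k)
    d2 = p / np.linalg.norm(p)
    print(np.linalg.norm(x_hat),                  # = 1 (unit vector)
          np.linalg.norm(d1),                     # = 1 (unit vector)
          np.linalg.norm(k * (x_hat - d1) - p),   # = 0, i.e. k(x-d1) = p
          np.linalg.norm(k * (x_hat - d2)))       # grows like sqrt(2) k
\end{verbatim}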
It remains to give the estimates of these high-order terms $H_j(k)$, and this is done by Lemma \ref{lemma:FjkEstimated-SchroEqu2018}. \begin{lem} \label{lemma:FjkEstimated-SchroEqu2018} The sum of the high-order terms $H_j(k)$ defined in \eqref{eq:Fjk-SchroEqu2018} satisfies the estimate $$\big| \sum_{j \geq 1} H_j(k) \big| \leq C k^{-1}$$ for all sufficiently large $k$, where the constant $C$ is independent of $k$. \end{lem} \begin{proof}[Proof of Lemma \ref{lemma:FjkEstimated-SchroEqu2018}] According to Lemma \ref{lemma:RkVBounded-SchroEqu2018}, we have \begin{align*} |H_j(k)| & \lesssim \int_D |V(y)| \cdot \big| [({\mathcal{R}_{k}} V)^{j} e^{ik d_1 \cdot (\cdot)}] (y) \big| \dif{y} + \int_D |V(y)| \cdot \big| [({\mathcal{R}_{k}} V)^{j} e^{ik d_2 \cdot (\cdot)}] (y) \big| \dif{y} \\ & \lesssim \nrm[L^\infty]{V} \cdot |D|^{1/2} \cdot \big( k^{-j} \nrm[L^2(D)]{ e^{ik d_1 \cdot (\cdot)} } + k^{-j} \nrm[L^2(D)]{ e^{ik d_2 \cdot (\cdot)} } \big) \\ & = 2\nrm[L^\infty]{V} \cdot |D| \cdot k^{-j}. \end{align*} Therefore, $$\big| \sum_{j=1}^{\infty} H_j(k) \big| \leq \sum_{j=1}^{\infty} |H_j(k)| \lesssim 2 \nrm[L^\infty]{V} \cdot |D| \cdot \sum_{j=1}^{\infty} k^{-j} = \frac{2 \nrm[L^\infty]{V} \cdot |D|}{k-1} \leq C k^{-1}$$ for all $k \geq 2$. The proof is done. \end{proof} \subsection{Recovery of the random source} The variance function of the random source is recovered in Section \ref{sec:RecVar-SchroEqu2018}, and now we recover its expectation. \begin{proof}[Proof of Theorem \ref{thm:UniSou1-SchroEqu2018}] According to Theorem \ref{thm:UniPot1-SchroEqu2018}, we have the uniqueness of the potential. Assume that two sources $f$, $f'$ generate the same far-field patterns for all $k > 0$. We denote the restrictions to $D$ of the corresponding total waves by $u$ and $u'$. Then, \begin{equation} \label{eq:uuprime-SchroEqu2018} \left\{\begin{aligned} (\Delta + k^2 + V) (\mathbb{E} u - \mathbb{E} u') & = f - f' && \text{ in } D \\ \mathbb{E} u - \mathbb{E} u' = \partial_\nu (\mathbb{E} u) - \partial_\nu (\mathbb{E} u') & = 0 && \text{ on } \partial D \end{aligned}\right. \end{equation} where $\nu$ is the outer normal to $\partial D$. Let test functions $v_k \in H_0^1(D)$ be the weak solutions of the boundary value problem \begin{equation} \label{eq:DiriLap-SchroEqu2018} \left\{\begin{aligned} (-\Delta - V)v_k & = k^2 v_k && \text{ in } D \\ v_k & = 0 && \text{ on } \partial D \end{aligned}\right. \end{equation} for suitably chosen $k$. The solutions $v_k$ are eigenvectors of the system \eqref{eq:DiriLap-SchroEqu2018}. From \eqref{eq:uuprime-SchroEqu2018} we have \begin{equation} \label{eq:uuprimev-SchroEqu2018} \int_D (\Delta + V + k^2) (\mathbb{E}u - \mathbb{E}u') \cdot v_k \dif{x} = \int_D (f - f')v_k \dif{x}. \end{equation} Integrating by parts and noting that the $v_k$'s in \eqref{eq:uuprimev-SchroEqu2018} satisfy \eqref{eq:DiriLap-SchroEqu2018}, we have \begin{equation} \label{eq:ffprime-SchroEqu2018} \int_D (f - f')v_k \dif{x} = 0. \end{equation} When $\nrm[L^\infty(D)]{V}$ is less than some constant depending on $D$, the set of eigenvectors $\{v_k\}$ corresponding to different eigenvalues $k^2$ forms an orthonormal basis of $L^2(D)$ [Theorem 2.37, \citen{mclean2000strongly}]. Therefore, from \eqref{eq:ffprime-SchroEqu2018} we conclude that $$f = f' \text{ in } L^2(D).$$ The proof is done. \end{proof} \section{Conclusions} \label{sec:Conclusions-SchroEqu2018} In this paper, we are concerned with a random Schr\"odinger equation. First, the well-posedness of the direct problem is studied.
Then, the variance function of the random source is recovered by using a single passive scattering measurement. By further utilizing active scattering measurements under a single realization of the random sample, the potential is recovered. Finally, with the help of multiple realizations of the random sample, the expectation of the random source is recovered. The major novelty of our study is that, on the one hand, both the random source and the potential are unknown and, on the other hand, both passive and active measurements are used to recover all of the unknowns. While the direct problem in this paper is well formulated in the space $L_{-1/2-\epsilon}^2$, the regularity of the solution of the random Schr\"odinger system is not taken into consideration. A different formulation of the direct problem, which takes the regularity of the solution into consideration, is possible, and this new formulation makes it possible to handle the case where both the source and the potential are random. We shall report our findings in this direction in a forthcoming article.

\end{document}
\begin{document} \title{On Coercivity and the Frequency Domain Condition in Indefinite LQ-Control} \pagestyle{myheadings} \markboth{T. Damm, B. Jacob }{Coercivity and Frequency Domain Condition} \begin{abstract} We introduce a coercivity condition as a time domain analogue of the frequency criterion provided by the famous Kalman-Yakubovich-Popov lemma. For a simple stochastic linear quadratic control problem we show how the coercivity condition characterizes the solvability of Riccati equations. \textit{Keywords:} linear quadratic control, Riccati equation, frequency domain condition, stochastic system \textit{MSC2020:} 93C80, 49N10, 15A24, 93E03 \end{abstract} \section{Introduction} \label{sec:introduction} Since the formulation of the Kalman-Yakubovich-Popov lemma in the 1960s the interplay of time domain and frequency domain methods has always been fruitful and appealing in linear control theory. For the linear-quadratic control problem and the algebraic Riccati equation this has been worked out to a large extent already in \cite{Will71}. However, the applicability of frequency domain methods is mostly limited to linear time-invariant deterministic models. In the consideration of time-varying or stochastic systems it is often necessary to find suitable substitutes. In this note we want to draw attention to an equivalent formulation of the frequency domain condition, which to our knowledge is not very present in the literature. We call it the {\em coercivity condition}. As our two main contributions, we first establish the equivalence of the coercivity condition and the frequency domain condition, and second we show that the coercivity condition plays the same role for the solvability of the Riccati equation of a stochastic linear quadratic control problem as the frequency condition does for the corresponding deterministic problem. To simplify the presentation we choose the simplest setup for the stochastic problem. A detailed discussion of the analogous result for time-varying linear systems is to be found in the forthcoming book \cite{HinrPrit}. It is a great honour for us to dedicate this note to Vasile Dr\u{a}gan on the occasion of his 70th birthday. We had the pleasure of collaborating with Vasile, e.g.~in \cite{DragDamm05} and \cite{JacoDrag98}. Vasile Dr\u{a}gan has made numerous and substantial contributions in the context of our topic. Together with Aristide Halanay, he was among the first to study stochastic disturbance attenuation problems \cite{DragHala96, DragHala97, DragHala99}, and in still ongoing work (e.g.\ \cite{DragIvan20}) he extended the theory in many different directions. The textbook \cite{DragMoro13} is closely related to this note. \section{Preliminaries} \label{sec:preliminaries} Consider the time-invariant finite-dimensional linear control system \begin{eqnarray*} \dot x(t)&=&Ax(t)+Bu(t),\quad t\ge 0,\\ x(0)&=&x_0, \end{eqnarray*} together with the {\em quadratic cost functional} \begin{displaymath} J(x_0,u)=\int_0^\infty \left[ \begin{array}{c} x(t)\\u(t) \end{array} \right]^*M \left[ \begin{array}{c} x(t)\\u(t) \end{array} \right]\,dt\;, \end{displaymath} where $A$, $B$ and $M$ are complex matrices of suitable size. Assume that $M=M^*=\left[ \begin{array}{cc} W&V^*\\V&R \end{array} \right]$ where $R>0$, but not necessarily $M\ge 0$ or $W\ge 0$. With these data we associate the {\em algebraic Riccati equation} \begin{equation} \label{eq:are} A^*P+PA+W-(B^*P+V)^*R^{-1}(B^*P+V)=0\;.
\end{equation} Moreover, for $\omega\in{\bf R}$ with $\imath\omega\not\in\sigma(A)$ we define the {\em frequency function} (or Popov function, \cite{Popo73}) \begin{equation} \label{eq:freqfunc} \Phi(\omega)=\left[ \begin{array}{c} (\imath \omega I-A)^{-1}B\\I \end{array} \right]^* M\left[ \begin{array}{c} (\imath \omega I-A)^{-1}B\\I \end{array} \right]\;. \end{equation} Here $\sigma(A)$ denotes the spectrum of the matrix $A$. Then the {\em strict frequency domain condition} requires \begin{equation} \label{eq:fdc_strict} \exists \varepsilon>0: \forall\omega\in{\bf R}, \imath\omega\not\in\sigma(A):\quad \Phi(\omega)\ge \varepsilon^2 B ^*(\imath\omega I-A)^* (\imath \omega I-A)^{-1}B\;. \end{equation} The {\em nonstrict frequency domain condition} is just \begin{equation} \label{eq:fdc_nonstrict} \forall\omega\in{\bf R}, \imath\omega\not\in\sigma(A):\quad \Phi(\omega)\ge 0\;. \end{equation} Note that (\ref{eq:fdc_strict}) holds with a given $M$ and fixed $ \varepsilon>0$, if and only if (\ref{eq:fdc_nonstrict}) holds with $M$ replaced by \begin{displaymath} M_ \varepsilon=M-\left[ \begin{array}{cc} \varepsilon^2I&0\\0&0 \end{array} \right]\;. \end{displaymath} \begin{remark}\label{rem:fdc_are} If $(A,B)$ is stabilizable, then it is well-known (e.g.\ \cite{Will71}) that (\ref{eq:are}) possesses a {\em stabilizing solution} (i.e.\ a solution $P$ with the additional property that $\sigma(A-BR^{-1}(B^*P+V))\subset{\bf C}_-$, where ${\bf C}_-$ denotes the open left half plane), if and only if the frequency condition (\ref{eq:fdc_strict}) holds. There exists an {\em almost stabilizing solution} (satisfying $\sigma(A-BR^{-1}(B^*P+V))\subset{\bf C}_-\cup \imath{\bf R}$), if and only if (\ref{eq:fdc_nonstrict}) holds. \end{remark} However, there are other classes of linear systems for which quadratic cost functionals can be formulated, which do not allow for an analogous frequency domain interpretation. These are, for instance, time-varying or stochastic systems e.g.\ \cite{DragMoro13}. It is therefore useful to have a time domain condition which is equivalent to (\ref{eq:fdc_strict}). Such a condition can be obtained by applying the inverse Laplace transformation, but we choose a more elementary approach here. For an initial value $x_0$ and a square-integrable input function $u\in L^2({\bf R}_+)$ we denote by $x(t,x_0,u)$ the unique solution of our time-invariant finite-dimensional linear control system at time $t$. Let \begin{displaymath} U=\{u\in L^2({\bf R}_+)\;\big|\; x(\cdot,0,u)\in L^2({\bf R}_+)\} \end{displaymath} denote the set of {\em admissible inputs}. For $u\in U$ we consider the cost associated to zero initial state \begin{equation} \label{eq:J0u} J(0,u)= \int_0^\infty \left[ \begin{array}{c} x(t,0,u)\\u(t) \end{array} \right]^* M \left[ \begin{array}{c} x(t,0,u)\\u(t) \end{array} \right] \,dt \;. \end{equation} Then we say that $J$ satisfies the {\em strict coercivity condition}, if \begin{equation} \label{eq:ccstrict} \exists \varepsilon >0 : \forall u\in U: \quad J(0,u)\ge \varepsilon^2\|x(\cdot,0,u)\|_{L^2}^2\;. \end{equation} We say that $J$ satisfies the {\em nonstrict coercivity condition}, if \begin{equation} \label{eq:cc} \forall u\in U:\quad J(0,u)\ge 0\;. \end{equation} As for the frequency domain conditions, note that (\ref{eq:ccstrict}) holds with a given $M$ and fixed $ \varepsilon>0$, if and only if (\ref{eq:cc}) holds with $M$ replaced by $M_ \varepsilon$.\\ In the next section, we prove the equivalence of (\ref{eq:fdc_nonstrict}) and (\ref{eq:cc}). 
Since in the strict cases with given $ \varepsilon>0$ we can replace $M$ by $M_ \varepsilon$ as indicated above, this also establishes the equivalence of (\ref{eq:fdc_strict}) and (\ref{eq:ccstrict}). Then, in Section \ref{sec:stoch-lq-contr}, we show for a stochastic LQ-problem that (\ref{eq:ccstrict}) is a natural time domain replacement of (\ref{eq:fdc_strict}).
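Before turning to the proof of equivalence, it may help to see how condition (\ref{eq:fdc_strict}) is checked in practice. The sketch below is our own numerical illustration (the matrices are arbitrary example data, not taken from the literature); it samples the Popov function on a frequency grid and tests the strict frequency domain inequality:
\begin{verbatim}
import numpy as np

# Example data: a stable A and an indefinite state weight W.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
W = np.array([[1.0, 0.0], [0.0, -0.2]])
V = np.zeros((1, 2))
R = np.eye(1)
M = np.block([[W, V.conj().T], [V, R]])

def popov(omega):
    # Popov function Phi(omega) from the definition above.
    G = np.linalg.solve(1j * omega * np.eye(2) - A, B)
    T = np.vstack([G, np.eye(1)])
    return T.conj().T @ M @ T, G

eps2 = 1e-3   # epsilon^2 in the strict frequency domain condition
ok = True
for omega in np.linspace(-100.0, 100.0, 4001):
    Phi, G = popov(omega)
    lhs = Phi - eps2 * (G.conj().T @ G)
    ok = ok and bool(np.all(np.linalg.eigvalsh(lhs) >= 0))
print("strict frequency domain condition holds on the grid:", ok)
\end{verbatim}
Of course a finite grid cannot replace the quantifier over all $\omega\in{\bf R}$; the point is only to make the object $\Phi(\omega)$ concrete.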
\section{Equivalence of frequency domain and coercivity condition} In this section we consider the system $\dot x=Ax+Bu$ and we assume that the pair $(A,B)\in{\bf C}^{n\times n}\times {\bf C}^{n\times m}$ is stabilizable. Solutions with initial value $x(0)=x_0$ and input $u\in L^2({\bf R}_+)$ are denoted by $x(\cdot,x_0,u)$. As above, let $$U=\{u\in L^2({\bf R}_+)\;\big|\; x(\cdot,0,u)\in L^2({\bf R}_+) \}$$ be the set of admissible inputs and let $M\in{\bf C}^{(n+m)\times(n+m)}$ be a weight matrix of the form $M=M^*=\left[ \begin{array}{cc} W&V^*\\V&R \end{array} \right]$ where $R>0$. \begin{theorem} The following statements are equivalent. \begin{itemize} \item[(a)] For all $u\in U$ it holds that \begin{displaymath} J(0,u)= \int_0^\infty \left[ \begin{array}{c} x(t,0,u)\\u(t) \end{array} \right]^* M \left[ \begin{array}{c} x(t,0,u)\\u(t) \end{array} \right] \,dt \ge 0\;. \end{displaymath} \item[(b)] For all $\omega\in{\bf R}$ with $\imath\omega\not\in\sigma(A)$ it holds that \begin{displaymath} \Phi(\omega)=\left[ \begin{array}{c} (\imath \omega I-A)^{-1}B\\I \end{array} \right]^* M\left[ \begin{array}{c} (\imath \omega I-A)^{-1}B\\I \end{array} \right] \ge 0\;. \end{displaymath} \end{itemize} \end{theorem} \bf Proof: \nopagebreak \rm (a)$\Rightarrow$(b) Let $\eta\in{\bf C}^m$ be arbitrary and $\omega>0$, $\imath\omega\not\in\sigma(A)$; the cases $\omega\le 0$ are treated analogously. We have to show that $\eta^*\Phi(\omega)\eta\ge 0$. Let $\xi=(\imath\omega I-A)^{-1}B\eta$. Then $\xi$ is reachable from $0$ and there exists a control input $u_0\in L^2([0,1])$ such that $x(1,0,u_0)=\xi$. Since $(A,B)$ is stabilizable, there also exists $u_\infty\in L^2({\bf R}_+)$ with $x(\cdot,\xi,u_\infty)\in L^2({\bf R}_+)$. For $k\in {\bf N}$, $k>0$, and $T_k=\frac{2k\pi}\omega+1 $, we define \begin{eqnarray*} u_k(t)&=&\left\{ \begin{array}{ll} u_0(t)&t\in[0,1[\\ \eta e^{\imath\omega (t-1)}&t\in[1,T_k]\\ u_\infty(t-T_k)& t\in\;]T_k,\infty[ \end{array} \right.\,. \end{eqnarray*} Then $x(1,0,u_k)=\xi$. An easy calculation shows that on $[1,T_k]$ we have the resonance solution $x(t,0,u_k)=\xi e^{\imath\omega (t-1)}$ with $x(T_k,0,u_k)=\xi$, so that the state is then stabilized by $u_\infty$ on $]T_k,\infty[$. The integrals \begin{eqnarray*} &&\int_0^1 \left[ \begin{array}{c} x(t,0,u_k)\\u_k(t) \end{array} \right]^* M \left[ \begin{array}{c} x(t,0,u_k) \\u_k(t) \end{array} \right] \,dt\\&&+\int_{T_k}^\infty \left[ \begin{array}{c} x(t,0,u_k) \\u_k(t) \end{array} \right]^* M \left[ \begin{array}{c} x(t,0,u_k) \\u_k(t) \end{array} \right] \,dt=c <\infty \end{eqnarray*} are independent of $k$. By (a) we have \begin{eqnarray*} 0&\le& J(0,u_k)=c+\int_1^{T_k} \left[ \begin{array}{c} x(t,0,u_k)\\u_k(t) \end{array} \right]^* M \left[ \begin{array}{c} x(t,0,u_k)\\u_k(t) \end{array} \right] \,dt\\ &=&c+\int_1^{T_k}\left[ \begin{array}{c} \xi e^{\imath\omega (t-1)}\\\eta e^{\imath\omega (t-1)} \end{array} \right]^* M \left[ \begin{array}{c} \xi e^{\imath\omega (t-1)}\\\eta e^{\imath\omega (t-1)} \end{array} \right] \,dt\\ &=&c+\int_1^{T_k}\left[ \begin{array}{c} \xi \\\eta \end{array} \right]^* M \left[ \begin{array}{c} \xi \\\eta \end{array} \right] \,dt\\ &=&c+\int_1^{T_k}\eta^*\left[ \begin{array}{c} (\imath \omega I-A)^{-1}B\\I \end{array} \right]^* M\left[ \begin{array}{c} (\imath \omega I-A)^{-1}B\\I \end{array} \right]\eta \,dt \;. \end{eqnarray*} Since $T_k$ can be arbitrarily large, the integrand must be nonnegative.
This proves (b).\\ (b)$\Rightarrow$(a) Note first that \begin{equation} \left[ \begin{array}{c} \xi\\\eta \end{array} \right]^*M \left[\begin{array}{c} \xi\\\eta \end{array} \right]\ge 0, \mbox{ if } (\imath\omega I-A) \xi=B\eta \mbox{ for some } \omega\in{\bf R}.\label{eq:initial_observation} \end{equation} Let now $u\in U$ be given and assume by way of contradiction that $J(0,u)<0$. For $T>0$, $x_0\in{\bf C}^n$, we set $$J_T(x_0,u)= \int_0 ^T \left[ \begin{array}{c} x(t,x_0,u)\\u(t) \end{array} \right]^* M \left[ \begin{array}{c} x(t,x_0,u)\\u(t) \end{array} \right] \,dt \;.$$ Then there exist $\delta>0$, $T_0>1$ such that $J_{T-1}(0,u)<-2\delta$ for all $T\ge T_0$. For $x_T=x(T-1,0,u)$ there exists a control input $u_T\in L^2([0,1])$ such that $x(1,x_T,u_T)=0$. In fact, one can choose $u_T(t)=-B ^*e^{A ^*(1-t)}P_1^\dagger e^Ax_T$, where $P_1$ denotes the finite-time controllability Gramian over the interval $[0,1]$ and $P_1^\dagger$ its Moore-Penrose pseudoinverse. Then, e.g.\ \cite{BennDamm11}, $$\|u_T\|_{L^2([0,1])}^2=x_T ^*e^{A ^*}P_1^\dagger e^Ax_T={\cal O}(\|x_T\|^2)\mbox{ for } x_T\to 0\;. $$ This implies that also $J_1(x_{T},u_{T})={\cal O}(\|x_T\|^2)$. Since $x(\cdot,0,u)\in L^2({\bf R}_+)$, we can fix $T>T_0$ such that $\|x_{T}\|$ is small enough to ensure $J_1(x_{T},u_{T})<\delta$. We concatenate $u\big|_{[0,T-1]}$ and $u_T$ into a new input $\tilde u\in L^2([0,T])$ with \begin{eqnarray*} \tilde u(t)&=&\left\{ \begin{array}{ll} u(t) & t\in[0,T-1[\\ u_T(t-T+1)& t\in[T-1,T] \end{array} \right.\;. \end{eqnarray*} By construction, we have \begin{equation} J_T(0,\tilde u)<-\delta<0\quad\mbox{ and }\quad 0=x(0,0,\tilde u) =x(T,0,\tilde u).\label{eq:JT} \end{equation} By definition $\tilde u, x\in L^2([0,T])$, and the equation $\dot x= Ax +B\tilde u$ implies that $x$ is absolutely continuous and $\dot x\in L^2([0,T])$. Thus, on $[0,T]$, the Fourier series of $\tilde u$, $x$ and $\dot x$ converge in $ L^2([0,T])$ to $\tilde u$, $x$ and $\dot x$, respectively. On $[0,T]$, let \begin{displaymath} x(t,0,\tilde u)=\sum_{k=-\infty}^\infty \xi_ke^{\imath\frac{2\pi k t}T}\quad\mbox{ and }\quad\tilde u(t)=\sum_{k=-\infty}^\infty \eta_ke^{\imath\frac{2\pi k t}T}. \end{displaymath} Then we get \begin{eqnarray} \nonumber \sum_{k=-\infty}^\infty B\eta_ke^{\imath\frac{2\pi k t}T} =B\tilde u(t) &=&\dot x(t,0,\tilde u)-Ax(t,0,\tilde u)\\&=&\sum_{k=-\infty}^\infty \left(\imath\frac{2\pi}T k I-A\right)\xi_ke^{\imath\frac{2\pi k t}T}\;. \label{eq:formalderivative} \end{eqnarray} Note that the periodicity condition $x(0)=x(T)$ in (\ref{eq:JT}) justifies the formal differentiation of the Fourier series in (\ref{eq:formalderivative}), e.g.\ \cite[Theorem 1]{Tayl44}.\\ Comparing the coefficients in (\ref{eq:formalderivative}), we have \begin{equation} \label{eq:xikBetak} \left(\imath\frac{2\pi}T k I-A\right)\xi_k=B\eta_k\;. \end{equation} In the expression for $J_T(0,\tilde u)$ we replace $x(t,0,\tilde u)$ and $\tilde u(t)$ by their Fourier-series representations. Exploiting orthogonality we have \begin{eqnarray*} J_T(0,\tilde u)&=& T\sum_{k=-\infty}^\infty \left[ \begin{array}{c} \xi_k\\\eta_k \end{array} \right]^*M \left[\begin{array}{c} \xi_k\\\eta_k \end{array} \right]. \end{eqnarray*} Together with (\ref{eq:xikBetak}) and (\ref{eq:initial_observation}) this implies $J_T(0,\tilde u)\ge 0$, contradicting the first condition in (\ref{eq:JT}). Thus our initial assumption was wrong, and we have shown that (b) implies (a).\eprf
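The steering step in this proof is classical. As a concrete illustration (our own sketch, with arbitrary example data and SciPy assumed available), one can build the finite-time controllability Gramian $P_1$ over $[0,1]$ by quadrature and check that $u_T(t) = -B^* e^{A^*(1-t)} P_1^\dagger e^{A} x_T$ indeed drives $x_T$ to the origin at time $1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # example data (controllable)
B = np.array([[0.0], [1.0]])

# P1 = int_0^1 e^{A(1-s)} B B^* e^{A^*(1-s)} ds, trapezoidal quadrature
s = np.linspace(0.0, 1.0, 2001)
vals = np.array([expm(A * (1 - si)) @ B @ B.T @ expm(A.T * (1 - si))
                 for si in s])
P1 = np.trapz(vals, s, axis=0)
P1_dag = np.linalg.pinv(P1)                # Moore-Penrose pseudoinverse

xT = np.array([1.0, -2.0])                 # state to be steered to zero

def u(t):
    return -B.T @ expm(A.T * (1.0 - t)) @ P1_dag @ expm(A) @ xT

sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(),
                (0.0, 1.0), xT, rtol=1e-10, atol=1e-12)
print("x(1) =", sol.y[:, -1])              # approximately [0, 0]
\end{verbatim}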
\section{An indefinite stochastic LQ-control problem} \label{sec:stoch-lq-contr} Consider the It\^o-type linear stochastic system \begin{equation} \label{eq:Ito} dx=(Ax+Bu)\,dt + Nx\,dw\;. \end{equation} Here $w$ is a Wiener process and by $L^2_w$ we denote the space of square integrable stochastic processes adapted to the filtration generated by $w$. For the appropriate definitions see textbooks such as \cite{Arno73, DragMoro13}. Let further the cost functional \begin{equation} \label{eq:JWR} J(x_0,u)={\bf E}\int_0^\infty \left[ \begin{array}{c} x(t,x_0,u)\\u(t) \end{array} \right]^* M \left[ \begin{array}{c} x(t,x_0,u)\\u(t) \end{array} \right] \,dt \end{equation} be given, where ${\bf E}$ denotes expectation. \\ For simplicity of presentation let $M=\left[ \begin{array}{cc} W&0\\0&I \end{array} \right]$ which can always be achieved by a suitable transformation if the lower right block of $M$ is positive definite, e.g.\ \cite[Section 5.1.7]{Damm04}. We do not impose any definiteness conditions on $W$. Note that we might include further noise processes or control dependent noise in (\ref{eq:Ito}) at the price of increasing the technical burden. \begin{definition} Equation (\ref{eq:Ito}) is {\em internally mean square asymptotically stable} if for all initial conditions $x_0$ the uncontrolled solution converges to zero in mean square, that is ${\bf E}\|x(t,x_0,0)\|^2\to 0$ for $t\to\infty$. In this case, for brevity, we also call the pair $(A,N)$ {\em asymptotically stable}. We call an input signal $u\in L^2_w$ admissible if also $x(\cdot,0,u)\in L^2_w$. \end{definition} It is well known that the pair $(A,N)$ is asymptotically stable if and only if \begin{displaymath} \sigma(I\otimes A+A\otimes I+N\otimes N)\subset{\bf C}_-\;, \end{displaymath} where $\otimes$ denotes the Kronecker product, \cite{Klei69}. With (\ref{eq:Ito}) and (\ref{eq:JWR}) we associate the algebraic Riccati equation \begin{equation} \label{eq:Riccati} A ^*P+PA+N ^*PN+W-PBB ^*P=0\;. \end{equation} \begin{definition} A solution $P$ of (\ref{eq:Riccati}) is {\em stabilizing} if the pair $(A-BB ^*P,N)$ is asymptotically stable. We call the triple $(A,N,B)$ {\em stabilizable} if there exists a matrix $F$ such that $(A+BF,N)$ is asymptotically stable. \end{definition} We now relate the existence of stabilizing solutions of (\ref{eq:Riccati}) to a coercivity condition. Recall from Remark \ref{rem:fdc_are} that the frequency condition is used for this purpose in the deterministic case. For stochastic systems, however, there is no obvious way to define a transfer function. \begin{theorem} Let $(A,N,B)$ be {\em stabilizable}.\\ The Riccati equation (\ref{eq:Riccati}) possesses a stabilizing solution, if and only if for some $ \varepsilon>0$ the coercivity condition $J(0,u)\ge \varepsilon^2\|x(\cdot,0,u)\|_{L^2_w}^2$ holds for all admissible $u$. \end{theorem} \bf Proof: \nopagebreak \rm We develop the proof along results available in the literature.\\ Let $W=W_1-W_2$, where both $W_1,W_2>0$, and consider first the definite LQ-problem with the cost functional \begin{displaymath} J_{W_1}(x_0,u)={\bf E}\int_0^\infty (x ^*W_1x+\|u\|^2)\,dt\;. \end{displaymath} Then it is known from \cite{Wonh68} that a minimizing control $u_1$ for $J_{W_1}$ is given in the form $u_1=Fx=-B ^*P_1x$, where $P_1$ is the unique stabilizing solution of the Riccati equation \begin{equation} \label{eq:Riccati1} A ^*P+PA+N ^*PN+W_1-PBB ^*P=0\;.
\end{equation} For a control of the form $u=-B ^*P_1x+u_2$ it follows that \begin{equation}\label{eq:J1x0} J(x_0,u)=x_0 ^*P_1x_0+{\bf E}\int_0^\infty \left(\|u_2(t)\|^2-x(t) ^*W_2x(t)\right)\,dt\;, \end{equation} where now $x(t)$ is the solution of the closed loop equation \begin{equation}\label{eq:P1closed} dx=(A-BB ^*P_1)x\,dt+Nx\,dw+Bu_2\,dt\; \end{equation} with initial value $x_0$. Our next goal is to minimize \begin{displaymath} J_{W_2}(x_0,u_2)={\bf E}\int_0^\infty \left(\|u_2(t)\|^2-x(t) ^*W_2x(t)\right)\,dt \end{displaymath} subject to (\ref{eq:P1closed}). If we factorize $W_2=C_2 ^*C_2$ and set $y(t)=C_2x(t)$, then we recognize $J_{W_2}$ as the cost functional related to the stochastic bounded real lemma, \cite[Theorem 2.8]{HinrPrit98}, see also e.g.\ \cite{DragMoro13}. The associated Riccati {\em inequality} \begin{equation} \label{eq:Riccati2} (A-BB ^*P_1) ^*P+P(A-BB ^*P_1)+N ^*PN-W_2-PBB ^*P>0\;, \end{equation} possesses a solution $\hat P<0$, if and only if there exists a $\delta>0$ such that \begin{equation}\label{eq:BRL_condition} J_{W_2}(0,u_2)\ge\delta^2\|u_2\|_{L^2_w}^2\mbox{ for all } u_2\in {L^2_w}\;, \end{equation} see \cite[Corollary 2.14]{HinrPrit98}. By \cite[Theorem 5.3.1]{Damm04} this is equivalent to the corresponding Riccati {\em equation} having a stabilizing solution $P_2<0$. \\ Note now that for $P=P_1+P_2$, the Riccati equation (\ref{eq:Riccati}) holds because \begin{eqnarray*} 0&=&(A-BB ^*P_1) ^*P_2+P_2(A-BB ^*P_1)+N ^*P_2N-W_2-P_2BB ^*P_2\\ &=&A ^*P_2+P_2A+N ^*P_2N-W_2-PBB ^*P+P_1BB ^*P_1\\ &=&A ^*P+PA+N ^*PN+W-PBB ^*P\;, \end{eqnarray*} where the last equality uses (\ref{eq:Riccati1}). Moreover, the pair $(A-BB ^*P,N)=(A-BB ^*P_1-BB ^*P_2,N)$ is asymptotically stable, i.e.\ $P$ is a stabilizing solution. It remains to show that (\ref{eq:BRL_condition}) is equivalent to the coercivity condition. As above, let $u\in {L^2_w}$ be of the form $u=-B ^*P_1x+u_2$. Assume first that the coercivity condition holds. By (\ref{eq:J1x0}) we have \begin{displaymath} J(0,u)=J_{W_2}(0,u_2)=\|u_2\|_{L^2_w}^2-\|y\|_{L^2_w}^2\ge \varepsilon^2 \|x\|_{L^2_w}^2\;, \end{displaymath} where $x$ solves (\ref{eq:P1closed}) and $y=C_2x$. It follows that $\|y\|_{L^2_w}\le \|C_2\|\,\|x\|_{L^2_w}$, whence \begin{displaymath} \|u_2\|_{L^2_w}^2\ge \left(1+\frac{ \varepsilon^2}{\|C_2\|^2}\right) \|y\|_{L^2_w}^2=\alpha \|y\|_{L^2_w}^2 \end{displaymath} with $\alpha>1$. Hence, with $\delta^2=1-\frac1\alpha>0$, we have \begin{displaymath} J_{W_2}(0,u_2)= \|u_2\|_{L^2_w}^2-\|y\|_{L^2_w}^2=\frac1\alpha \|u_2\|_{L^2_w}^2-\|y\|_{L^2_w}^2+\delta^2 \|u_2\|_{L^2_w}^2\ge \delta^2 \|u_2\|_{L^2_w}^2 \end{displaymath} for all $u_2\in L^2_w$, which is (\ref{eq:BRL_condition}). Vice versa, assume (\ref{eq:BRL_condition}). Since (\ref{eq:P1closed}) is asymptotically stable, the system has finite input-to-state gain $\gamma$, such that $\|x\|_{L^2_w}\le \gamma \|u_2\|_{L^2_w}$. Therefore \begin{displaymath} J(0,u)=J_{W_2}(0,u_2)\ge \frac{\delta^2}{\gamma^2} \|x\|_{L^2_w}^2= \varepsilon^2\|x\|_{L^2_w}^2 \end{displaymath} with $ \varepsilon=\frac{\delta}{\gamma}$. This is the coercivity condition. \eprf \section{Conclusion} \label{sec:conclusion} We have provided a time domain substitute for the frequency domain condition of the Kalman-Yakubovich-Popov lemma. The equivalence of the two criteria has been proven and the applicability has been demonstrated for a stochastic linear quadratic control problem. \end{document}
\begin{document} \title{The Use of Deep Learning for Symbolic Integration \\ A Review of (Lample and Charton, 2019)} \begin{abstract} Lample and Charton (2019) describe a system that uses deep learning technology to compute symbolic, indefinite integrals, and to find symbolic solutions to first- and second-order ordinary differential equations, when the solutions are elementary functions. They found that, over a particular test set, the system could find solutions more successfully than sophisticated packages for symbolic mathematics such as Mathematica run with a long time-out. This is an impressive accomplishment, as far as it goes. However, the system can handle only a quite limited subset of the problems that Mathematica deals with, and the test set has significant built-in biases. Therefore the claim that it outperforms Mathematica on symbolic integration needs to be very much qualified. \end{abstract} Lample and Charton (2019) describe a system (henceforth LC) that uses deep learning technology to compute symbolic, indefinite integrals, and to find symbolic solutions to first- and second-order ordinary differential equations, when the solutions are elementary functions (i.e. compositions of the arithmetic operators with the exponential and trigonometric functions and their inverses). They found that, over a particular test set, LC could find solutions more successfully than sophisticated packages for symbolic mathematics such as Mathematica given a long time-out. This is an impressive accomplishment; however, it is important to understand its scope and limits. We will begin by discussing the case of symbolic integration, which is simpler. Our discussion of ODEs is much the same; however, they introduce technical complications that are largely extraneous to the points we want to make. \section{Symbolic integration} There are three categories of computational symbolic mathematics that are important here: \begin{itemize} \item {\bf Symbolic differentiation.} Using the standard rules for differential calculus, this is easy to program and efficient to execute. \item {\bf Symbolic integration.} This is difficult. In most cases, the integral of an elementary function that is not extremely simple is not, itself, an elementary function. The decision problem of whether the integral of an elementary function is itself elementary is undecidable (Richardson, 1969). Even in the cases where the integral is elementary, finding it can be very difficult. Nonetheless, powerful modules for symbolic integration have been incorporated into systems for symbolic math like Mathematica, Maple, and Matlab. \item {\bf Simplification of symbolic expressions.} The decision problem of determining whether an elementary expression is identically equal to zero is undecidable (Richardson, 1969). Symbolic math platforms incorporate powerful simplification modules, but building a high-quality simplifier is a substantial undertaking. \end{itemize} If one can, in one way or another, conjecture that the integral of elementary function $f$ is elementary function $g$ (both functions being specified symbolically), then verifying that conjecture involves, first, computing the derivative $h = g^{\prime}$ and, second, determining that the expression $h-f$ simplifies to 0. As we have stated, the first step is easy; the second step is hard in principle but often reasonably straightforward in practice.
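In a modern computer algebra system this verification loop is a few lines of code. The following sketch uses SymPy as a stand-in (this is our illustration, not the authors' code; LC relies on its own differentiation and simplification components):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')

def verify_integral(f, g):
    # Check the conjecture "the integral of f is g": differentiate g
    # (easy), then ask the simplifier whether g' - f collapses to 0
    # (hard in principle, often fine in practice).
    return sp.simplify(sp.diff(g, x) - f) == 0

f = x * sp.exp(x)
g = (x - 1) * sp.exp(x)                     # conjectured antiderivative
print(verify_integral(f, g))                # True
print(verify_integral(f, x * sp.exp(x)))    # False: wrong conjecture
\end{verbatim}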
Given an elementary expression $f$, finding an elementary symbolic integral is, in general, a search in an enormous and strange state space for something that most of the time does not even exist. Even if you happen to know that it exists, as is the case with the test examples used by Lample and Charton, it remains a very hard problem. \section{What LC does and how it works} At a high level, LC works as follows: \begin{itemize} \item A large corpus of examples (80 million) was created synthetically by generating random, complicated pairs of symbolic expressions and their derivatives. We will discuss below how that was done. \item A seq2seq transformer model was trained on the corpus. \item At testing time, given a function $g$ to integrate, the model was executed, using a beam search of width either 1, 10, or 50. An answer $f$ produced by the model was checked using the procedure described above: the symbolic differentiator was applied to $f$, and then the symbolic simplifier tested whether $f'=g$. \end{itemize} In effect, the process of integration is being treated as something like machine translation: the source is the integrand, the target is the integral. Three techniques were used to generate integral/derivative pairs: \begin{itemize} \item {\bf Forward generation} (FWD). Randomly generate a symbolic function; give it to a preexisting symbolic integrator; if it finds an answer, then record the pair. 20 million such pairs were created. This tends to generate pairs with comparatively small derivatives and large integrals, in terms of the size of the symbolic expression. \item {\bf Backward generation} (BWD). Randomly generate a symbolic function; compute its derivative; and record the pair. 40 million such pairs were created. The form of the derivative is simplified by symbolic techniques before the pair is recorded. This approach tends to generate pairs with comparatively small integrals and derivatives that are almost always much larger. \item {\bf Integration by parts} (IBP). If, for some functions $f$ and $G$, LC has computed that the integral of $f$ is $F$ and that the integral of the product $fG$ is $H$, then, by the rule of integration by parts, \[ \int Fg = FG - H, \] where $g$ is the derivative of $G$. It can now record $Fg$ and its integral as a new pair. 20 million such pairs were created. \end{itemize} The comparisons to Mathematica, Maple, and Matlab were carried out using only items generated by BWD. They found that LC was able to solve a much higher percentage of this test set than Mathematica, Maple, or Matlab, even when all of these were given an extended time-out. Mathematica, the highest scoring of these, was able to solve 84\% of the problems in the test set, whereas, running with a beam size of 1, LC produced the correct solution 98.4\% of the time. \section{No integration without simplification!} There are many problems where it is critical to simplify an integrand before carrying out the process of integration. Consider the following integral: \begin{equation} \int \sin^{2}(e^{e^x}) + \cos^{2}(e^{e^x}) \: dx \end{equation} At first glance, that looks scary, but in fact it is just a ``trick question'' that some malevolent calculus teacher might pose. The integrand is identically equal to 1, so the integral is $x+c$. Obviously, one can spin out examples of this kind to arbitrary complexity.
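For what it is worth, any modern simplifier disposes of integral (1) instantly; here is a SymPy illustration (ours, not part of LC):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
u = sp.exp(sp.exp(x))
integrand = sp.sin(u)**2 + sp.cos(u)**2
print(sp.simplify(integrand))                    # 1
print(sp.integrate(sp.simplify(integrand), x))   # x
\end{verbatim}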
The reader might enjoy evaluating the following integral \begin{equation} \int \sin(e^{x}+\frac{e^{2x}-1}{2\cos^{2}(\sin(x))-1}) - \cos(\frac{(e^{x}+1)(e^{x}-1)}{\cos(2\sin(x))})\sin(e^{x}) - \cos(e^{(x^{3}+3x^{2}+3x+1)^{1/3}-1}) \sin(\frac{e^{2x}-1}{1-2\sin^{2}(\sin(x))}) \: dx \end{equation} or not. Anyway, rule-based symbolic systems can and do quite easily carry out a sequence of transformations to do the simplification here. This raises two issues: \begin{itemize} \item It is safe to assume that few examples of this form, of any significant complexity, were included in Lample and Charton's corpus. BWD and IBP cannot possibly generate these. FWD could, in principle, but the integrand would have to be generated at random, which is extremely improbable. Therefore, LC was not tested on them. \item Could LC have found a solution to such a problem if it were tested on one? The answer to that question is certainly yes, if the test procedure begins by applying a high-quality simplifier and reduces the integrand to a simple form. Alternatively, the answer is also yes if, with integral (1), LC proposes ``$f(x)=x$'' as a candidate solution and the simplifier then verifies that, indeed, $f'$ is equal to the complicated integrand. It does seem rather unlikely that LC, operating on the integrand in equation (1), would propose $f(x)=x$ as a candidate. If one were to construct a problem where a mess like integral (2) has some moderately complicated solution --- say, $\log(x^{2}/\sin(x))$ --- which, of course, is easily done, the likelihood that LC will find it seems still smaller; though certainly there is no way to know until you try. \end{itemize} Fran\c{c}ois Charton informs me (personal communication) that in fact LC did not do simplifications at this stage and therefore would not have been able to solve these problems. This is not a serious strike against their general methodology. The problem is easily fixed; they can easily add calls to the simplifier at the appropriate steps, and they could automatically generate examples of this kind for their corpus by using an ``uglifier'' that turns simple expressions into equivalent complicated ones. But the point is that there is a class of problems whose solution inherently requires a high-quality simplifier, and which currently is not being tested. One might object that problems of this form are very artificial. But the entire problem that LC addresses is very artificial. If there is any natural application that tends to generate lots of problems of integrating novel complicated symbolic expressions with the property that a significant fraction of those have integrals that are elementary functions, I should like to hear about it. As far as I know, the problem is purely of mathematical interest. Another situation where simplification is critical: Suppose that you take an integrand produced by BWD and make some small change. Almost certainly, the new function $f$ has no elementary integral. Now give it to LC. LC will produce an answer $g$, because it always produces an answer, and that answer will be wrong, because there is no right answer. In a situation where you actually cared about the integral of $f$, it would be undesirable to accept $g$ as an answer. So you can add a check; you differentiate $g$ and check whether $g'=f$. But now you are checking for the equivalence of two very complicated expressions, and again you would need a very high-powered simplifier.
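Such an ``uglifier'' is easy to prototype. Here is a toy sketch (entirely our own construction, with two hard-coded identities) that rewrites an expression into an equivalent but messier form, which a simplifier can undo:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')

def uglify(expr):
    # Multiply by an elaborate form of 1 and add an elaborate form
    # of 0, then expand: a toy version of corpus "uglification".
    u = sp.exp(sp.exp(x))
    one = sp.sin(u)**2 + sp.cos(u)**2            # identically 1
    zero = sp.cosh(x)**2 - sp.sinh(x)**2 - 1     # identically 0
    return sp.expand(expr * one + zero * sp.log(1 + x**2))

e = x**2 + sp.sin(x)
ugly = uglify(e)
print(sp.count_ops(e), "->", sp.count_ops(ugly))  # complexity jumps
print(sp.simplify(ugly - e))                      # 0: same function
\end{verbatim}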
\section{Differential equations} Lample and Charton have developed a very ingenious technique in which you can input any elementary function $f(x,c)$ with a single occurrence of parameter $c$, and find an ODE whose solution is $f(x,c)$, where $c$ is the free parameter of the general solution. They also can do the corresponding thing for second-order equations. Their overall procedure was then essentially the same as for integrals: They generated a large corpus of pairs of equations and solutions, and trained a seq2seq neural network. At testing time, LC used the neural network to carry out a beam search which generated candidates; each candidate was tested to see whether it was a solution to the problem. Here the results were more mixed. With first-order equations, LC with a beam size of 1 comes out slightly ahead of Mathematica (81.2\% to 77.2\%); with a beam size of 50, it comes out well ahead (97\%). With second-order equations, LC with a beam size of 1 does not do as well as Mathematica (40.8\% to 61.6\%), but with a beam size of 50 it attains 81.0\%. The concerns that we have raised in the context of integration apply here as well, suitably adapted. \section{Special functions} Systems such as Mathematica, Maple, and Matlab are able to solve symbolically many symbolic integration problems and many differential equations in which the solution is a special function (i.e. a non-elementary function with a standard name). For instance Mathematica can integrate the function $\log(1-x)/x$ to get the answer $-\mbox{PolyLog}(2,x)$. It can solve the equation \[ x^{2}y^{\prime \prime}(x) + xy^{\prime}(x) +(x^{2}-16)y(x) = 0 \] to find the solution $y(x) = c_{1}\mbox{BesselJ}(4,x) + c_{2}\mbox{BesselY}(4,x)$. In principle, LC could be extended to handle these. In FWD, it would be a matter of including the pairs where the automated integrator being called generates an expression with a special function. In BWD, it would be a matter of generating expressions with special functions and computing their derivatives. With integration, the impact on performance might be small; special functions that are integrals of elementary functions are mostly unary (though PolyLog is binary), and therefore have only a moderate impact on the size of the state space. But the ODE solver is a different matter; many of the functions that arise in solving ODEs, such as the many variants of Bessel functions, are binary, and adding expressions that include these expands the search space exponentially. To put the point another way: When the ODE solver was tested, Mathematica was searching through a space of solutions that includes the special functions, whereas LC was limited to the much smaller space of the elementary functions. The tests were designed so that the solution was always in the smaller space. LC thus had an entirely unfair advantage.
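Open-source computer algebra reaches the same answers; here is a SymPy illustration of the two examples above (ours; SymPy's polylog and besselj/bessely correspond to the Mathematica names, and the exact output format is version-dependent):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# d/dx[-polylog(2, x)] = -polylog(1, x)/x, and polylog(1, x) expands
# to -log(1 - x), recovering the integrand log(1 - x)/x.
print(sp.expand_func(sp.diff(-sp.polylog(2, x), x)))

# Bessel's equation of order 4.
ode = sp.Eq(x**2 * y(x).diff(x, 2) + x * y(x).diff(x)
            + (x**2 - 16) * y(x), 0)
print(sp.dsolve(ode, y(x)))  # y(x) = C1*besselj(4, x) + C2*bessely(4, x)
\end{verbatim}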
\section{The Test Set} There are also issues with the test set. The comparison with Mathematica, Matlab, and Maple used a test set consisting entirely of problems generated by BWD (problems generated by FWD by definition can be solved by symbolic integrators). These inevitably tend to have comparatively small integrals (in expression size) and long integrands. Unless you are very lucky, or unless an expression is full of addition and subtraction, the derivative of an expression of size $n$ has length $\Omega(n^2)$.
In symbols: the derivative of the four-fold composition $\sin(\sin(\sin(\sin(x))))$ is \[ \cos(\sin(\sin(\sin(x)))) \cdot \cos(\sin(\sin(x))) \cdot \cos(\sin(x)) \cdot \cos(x). \] And in fact the average length of an integrand in the test set was 70 symbols, with a standard deviation of 47 symbols; thus a substantial fraction of the test examples had on the order of 120 symbols (Table 1 of Lample and Charton). So what the comparison with Mathematica establishes is that, given a really long expression which happens to have a much shorter, exact symbolic integral, LC is awfully good at finding it. But that is a really special class of problems. One can certainly understand why the teams building Mathematica and so on have not considered this niche category much of a priority.

Another point that does not seem to have been tested is whether LC may have been picking up on arbitrary artifacts of the differentiation process, such as the order in which parts of a derivative are presented. For instance, the derivative of a three-level composed function $f(g(h(x)))$ is a product of three terms $h'(x) \cdot g'(h(x)) \cdot f'(g(h(x)))$. Any particular symbolic differentiator will probably generate these in a fixed order, such as the one above. This particular choice of ordering will then be consistent throughout the corpus, so LC will be trained and then tested only with this ordering. A system like LC may have much more difficulty finding the integral if the multiplicands are presented in any of the five other possible orders.

The techniques that LC learns from BWD and FWD are very different. If LC is trained only on BWD and tested on problems in FWD, then, running with a beam size of 1, it finds the correct solution to a problem in FWD only 18.9\% of the time; with a beam size of 50, the correct solution is among its top 50 candidates only 27.5\% of the time. Trained only on problems in FWD and tested on problems in BWD, it does even worse, with corresponding success rates of 10.9\% and 17.2\%. Moreover, the procedure for generating corpus examples will not succeed in creating examples that combine features from FWD and BWD. For instance, if $(f', f)$ is a pair that would naturally be generated by FWD, and $(g', g)$ is a pair that would naturally be generated by BWD, then the sum $(f'+g', f+g)$ will not be included in the corpus, and therefore will not be tested.

In fact, if one were to put together a test set of random, enormously complex integrands, LC would certainly give a wrong answer on nearly all of them, because only a small fraction would have an elementary integral. Mathematica, certainly, would also fail to find an integral, but presumably it would not give a wrong answer; it would either give up or time out. If you consider that a wrong answer is worse than no answer, then on this test set Mathematica would beat LC by an enormous margin.

\section{Summary} The fact that LC ``beat'' Mathematica on the test set of integration problems produced by BWD is certainly impressive. But Lample and Charton's claim \begin{quote} [This] transformer model \ldots can perform extremely well both at computing function integrals and solving differential equations, outperforming \ldots Matlab or Mathematica \ldots \end{quote} is very much overstated, and requires significant qualification.
The correct statement, as regards integration, is as follows: \begin{quote} The transformer model outperforms Mathematica and Matlab in computing symbolic indefinite integrals of enormously complex functions of a single variable $x$ whose integral is a much smaller elementary function containing no constant symbols other than the integers $-5$ to $5$. \end{quote} Since both BWD and FWD were limited to functions of a single variable $x$, it is unknown whether LC can handle $\int t \, dt$ or $\int a \, dx$ (it is not clear whether LC's input includes any way to specify the variable of integration), and it is essentially certain that it cannot handle $\int 1/(x^{2} + a^{2}) \, dx$. On problems like these, far from outperforming Mathematica and Matlab, it falls far short of a high-school calculus student.

It is important to emphasize that {\em the construction of LC is entirely dependent on the pre-existing symbolic processors developed over the last fifty years by experts in symbolic mathematics.} Moreover, as things now stand, extending LC to fill in some of its gaps (e.g.\ the simplification problems described in section 3) would make it even less of a stand-alone system and more dependent on conventional symbolic processors. There is no reason whatever to suppose that NN-based systems will supersede symbolic mathematics systems any time in the foreseeable future.

It goes without saying that LC has no understanding of the significance of an integral or a derivative, or even of a function or a number. In fact, it occasionally outputs a solution that is not even a well-formed expression. LC is like the worst possible student in a calculus class: it doesn't understand the concepts, it hasn't learned the rules, and it has no idea of the significance of what it is doing; but it has looked at 80 million examples and gotten a feel for what integrands and their integrals look like.

Finally, LC, like the recent successes in game-playing AI, depends on the ability to generate enormous quantities of high-quality (in the case of LC, flawless) synthetic labelled data. In open-world domains, this is effectively impossible. Therefore, the success of LC is in no way evidence that deep learning or other such methods will suffice for high-level reasoning in real-world situations.

\subsection*{References}

Lample, Guillaume and Fran\c{c}ois Charton, 2019. ``Deep Learning for Symbolic Mathematics.'' {\em NeurIPS 2019.} https://arxiv.org/abs/1912.01412

Richardson, Daniel, 1969. ``Some undecidable problems involving elementary functions of a real variable.'' {\em The Journal of Symbolic Logic,} {\bf 33}:4, 514--520.

WolframAlpha, ``Integrals that Can and Cannot be Done.'' \\ https://reference.wolfram.com/language/tutorial/IntegralsThatCanAndCannotBeDone.html

WolframAlpha, ``Symbolic Differential Equation Solving.'' \\ https://reference.wolfram.com/language/tutorial/DSolveOverview.html

\end{document}
\begin{document} \title{Finite and symmetrized colored multiple zeta values}

\begin{abstract} Colored multiple zeta values are special values of multiple polylogarithms evaluated at $N$th roots of unity. In this paper, we define both the finite and the symmetrized versions of these values and show that they both satisfy the double shuffle relations. Further, we provide strong evidence for an isomorphism connecting the two spaces generated by these two kinds of values. This is a generalization of a recent work of Kaneko and Zagier on finite and symmetrized multiple zeta values and of the second author on finite and symmetrized Euler sums. \end{abstract} \tableofcontents

\section{Introduction} \label{sec:Intro} Let $\gG_N$ be the group of $N$th roots of unity. For $\bfs:=(s_1,\dots,s_d) \in \mathbb{N}^d$ and $\bfeta:=(\eta_1,\dots,\eta_d)\in (\gG_N)^d$, we define \emph{colored multiple zeta values} (CMZVs) by \begin{align}\label{def:CMZV} \zeta\lrp{\bfs}{\bfeta}:=\sum_{k_1>k_2>\cdots>k_d>0} \frac{\eta_1^{k_1}\cdots\eta_d^{k_d}}{k_1^{s_1}\cdots k_d^{s_d}}. \end{align} We call $d$ the \emph{depth} and $|\bfs|:=s_1+\dots+s_d$ the \emph{weight}. These objects were first systematically studied by Deligne, Goncharov, Racinet, Arakawa and Kaneko \cite{Arakawa99,Deligne05,Goncharov01,Racinet02}. It is not hard to see that the series in \eqref{def:CMZV} diverges if and only if $(s_1,\eta_1)=(1,1)$. By multiplying these series we get the so-called stuffle (or quasi-shuffle) relations. Additionally, it turns out that these values can also be expressed by iterated integrals, which leads to the shuffle relations. Note that for $N=1$ we recover \emph{multiple zeta values} (MZVs) \begin{align}\label{def:MZV} \zeta(\bfs):= \sum_{k_1>k_2>\cdots>k_d>0} \frac{1}{k_1^{s_1}\cdots k_d^{s_d}} = \zeta\lrp{\bfs}{\{1\}^d}, \end{align} where $\{s\}^d:=(s,\ldots,s)\in \mathbb{N}^d$. When $N=2$ the colored multiple zeta values are usually called \emph{Euler sums}.

In \cite{Ihara06} Ihara, Kaneko and Zagier defined the regularized MZVs in two different ways and then obtained the regularized double shuffle relations. Racinet \cite{Racinet02} and Arakawa and Kaneko \cite{Arakawa04} further generalized these regularized values to arbitrary levels; we denote them by $\displaystyle \zeta_\ast\lrpT{\bfs}{\bfeta}$ and $\displaystyle\zeta_\sha\lrpT{\bfs}{\bfeta}$, which are in general polynomials in $T$. Using these values, we define the \emph{symmetrized colored multiple zeta values} (SCVs) of level $N$ by \begin{align} \zeta_\ast^\Sy\lrp{\bfs}{\bfeta}:=&\, \sum_{j=0}^d (-1)^{s_1+\cdots+s_j} \ol{\eta_1}\cdots\ol{\eta_j}\,\zeta_\ast\lrpT{s_j,\ldots,s_1}{\ol{\eta_j},\ldots,\ol{\eta_1}} \zeta_\ast\lrpT{s_{j+1},\dots,s_d}{\eta_{j+1},\dots,\eta_d}, \label{equ:astSCV}\\ \zeta_\sha^\Sy\lrp{\bfs}{\bfeta}:=&\, \sum_{j=0}^d (-1)^{s_1+\dots+s_j} \ol{\eta_1}\cdots\ol{\eta_j}\,\zeta_\sha\lrpT{s_j,\dots,s_1}{\ol{\eta_j},\dots,\ol{\eta_1}} \zeta_\sha\lrpT{s_{j+1},\dots,s_d}{\eta_{j+1},\dots,\eta_d} \label{equ:shaSCV} \end{align} for all $\bfs\in \mathbb{N}^d$ and $\bfeta \in (\gG_N)^d$.
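For orientation, in depth one the sums \eqref{equ:astSCV} and \eqref{equ:shaSCV} consist of just the two terms $j=0$ and $j=1$ (the empty product of regularized values being $1$); for instance, \begin{align*} \zeta_\ast^\Sy\lrp{s}{\eta} = \zeta_\ast\lrpT{s}{\eta} + (-1)^{s}\,\ol{\eta}\,\zeta_\ast\lrpT{s}{\ol{\eta}}, \end{align*} so the symmetrization pairs each value with its reversed, conjugated and sign-twisted partner.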
This definition includes as special cases the \emph{symmetrized MZVs} (when $N=1$) introduced by Kaneko and Zagier (see also Jarossay \cite{Jarossay14}) and the \emph{symmetrized Euler sums} (when $N=2$) established in \cite{Zhao15} by the second author. We will show that SCVs are actually independent of $T$ (Proposition \ref{prop:SCVconst}) and the two versions are essentially the same modulo $\zeta(2)$ (Theorem \ref{theo:moduloz2}). Furthermore, we prove that they satisfy both the stuffle relations (Theorem \ref{thm:astSCVmorphism}) and the shuffle relations (Theorem \ref{thm:shuffleSCV}) by using two different Hopf algebra structures, respectively.

Let $\calP$ be the set of rational primes and $\F_p$ the finite field of $p$ elements. Set \begin{align*} \calP(N):=\{p\in \calP \colon p\equiv -1 \pmod{N} \}, \end{align*} which is of infinite cardinality with density $1/\varphi(N)$ by Chebotarev's Density Theorem \cite{Tschebotareff26}, where $\varphi(N)$ is Euler's totient function. Further, we define \begin{equation*} \F_p[\xi_N]:=\frac{\F_p[X]}{(X^N-1)}= \left\{ \sum_{j=0}^{N-1} c_j \xi_N^j\colon c_0,\dots,c_{N-1}\in \F_p\right\}, \end{equation*} where $\xi_N:=\xi_{N,p}$ is a fixed primitive root of $X^N-1\in\F_p[X]$. Moreover, we denote \begin{equation*} \calA(N):=\prod_{p\in\calP(N)} \F_p[\xi_N] \bigg/\bigoplus_{p\in\calP(N)} \F_p[\xi_N]. \end{equation*} We usually remove the dependence of $\xi_N$ on $p$ by abuse of notation. For convenience, we also identify $\prod_{p\in\calP(N),p>k} \F_p[\xi_N] \bigg/\bigoplus_{p\in\calP(N),p>k} \F_p[\xi_N]$ with $\calA(N)$ by setting the components $a_p=0$ for all $p\le k$.

Now we define the \emph{finite colored multiple zeta values} (FCVs) of level $N$ by \begin{equation}\label{equ:defnFCV} \zeta_{\calA(N)} \lrp{\bfs}{\bfeta}:=\left( \sum_{p>k_1>k_2>\cdots>k_d>0} \frac{\eta_1^{k_1}\cdots\eta_d^{k_d}}{k_1^{s_1}\cdots k_d^{s_d}}\right)_{p\in\calP(N)} \in \calA(N) \end{equation} for all $\bfs\in\N^d$ and $\bfeta\in(\gG_N)^d$. Again, this definition includes as special cases the \emph{finite MZVs} (when $N=1$) and the \emph{finite Euler sums} (when $N=2$). \begin{rem}\label{rem:subtleDefn} In fact, we have abused the notation in \eqref{equ:defnFCV}. All $\eta_j=\eta_{j,p}$ depend on $p$, so that by a fixed choice $\bfeta\in (\gG_N)^d$ we really mean a fixed choice of $(e_1,\dots,e_d)\in(\Z/N\Z)^d$, independent of $p$, such that $\eta_{j,p}:=\xi_{N,p}^{e_j}$ for all $p$. \end{rem}

Similar to SCVs, we shall show that FCVs satisfy both the stuffle and the shuffle relations, in Theorem \ref{thm:stuffleFCV} and Theorem \ref{thm:shuffleFCV}, respectively. The primary motivation for this paper is the following conjectural relation between SCVs and FCVs. Let $\CMZV_{w,N}$ (resp.\ $\FCV_{w,N}$, resp.\ $\SCV_{w,N}$) be the $\Q(\gG_N)$-space generated by CMZVs (resp.\ FCVs, resp.\ SCVs) of weight $w$ and level $N$. Set $\nSCV_{0,N}:=\Q(\gG_N)$ and define $\nSCV_{w,N}:=\SCV_{w,N}+2\pi i\, \SCV_{w-1,N}$ for all $w\ge 1$. \begin{conj}\label{conj:Main} Let $N\ge 3$ and weight $w\ge 1$. Then: \begin{itemize} \item [\upshape{(i)}] $\nSCV_{w,N}=\CMZV_{w,N}$ as vector spaces over $\Q(\xi_N)$. \item [\upshape{(ii)}] We have a $\Q(\gG_N)$-algebra isomorphism \begin{align*} f\colon \FCV_{w,N} &\, \overset{\sim}{\lra} \frac{\CMZV_{w,N}}{2\pi i\, \CMZV_{w-1,N}} \\ \zeta_{\calA(N)}\lrp{\bfs}{\bfeta} &\,\lmaps \ \zeta_{\sha}^\Sy\lrp{\bfs}{\bfeta}. \end{align*} \end{itemize} \end{conj} It follows from this conjecture that \begin{align*} \frac{\CMZV_{w,N}}{2\pi i\, \CMZV_{w-1,N}} \cong \frac{\nSCV_{w,N}}{2\pi i\, \nSCV_{w-1,N}} \cong \frac{\SCV_{w,N}}{\big(2\pi i\, \SCV_{w-1,N}+\zeta(2)\SCV_{w-2,N}\big)\cap \SCV_{w,N}}. \end{align*} So the map $f$ in Conjecture~\ref{conj:Main} (ii) is surjective. The analogous result for MZVs at level $1$ corresponding to Conjecture~\ref{conj:Main} (i) has been proved by Yasuda \cite{Yasuda2014}. For numerical examples supporting Conjecture~\ref{conj:Main} at levels 3 and 4, see Section \ref{sec:numex}. This conjecture should be regarded as a generalization of the corresponding conjectures of Kaneko and Zagier for the MZVs and of the second author for the Euler sums \cite{Zhao15}, where $2\pi i$ is replaced by $\zeta(2)$ since the MZVs and Euler sums are all real numbers. We are sure the final proof of Conjecture~\ref{conj:Main} will need the $p$-adic version of the generalized Drinfeld associators, whose coefficients should satisfy the same algebraic relations as those of CMZVs plus the equation ``$2\pi i=0$'' when the level $N\ge 3$. \bigskip

{\bf{Acknowledgements.}} The authors would like to thank the ICMAT at Madrid, Spain, for its warm hospitality and gratefully acknowledge the support by the Severo Ochoa Excellence Program. \bigskip
\section{Algebraic framework} \label{sec:algframe} The study of MZVs via word algebras was initiated by Hoffman \cite{Hoffman97} and generalized by Racinet \cite{Racinet02} to deal with colored MZVs, which we now review briefly. Fix a positive integer $N$ as the level. Define the alphabet $X_N:=\{x_\eta\colon \eta \in \gG_N\cup \{0\}\}$ and let $X_N^\ast$ be the set of words over $X_N$, including the empty word $\be$. Denote by $\fA_N$ the free noncommutative polynomial algebra in $X_N$, i.e., the algebra of words on $X_N^\ast$. The \emph{weight} of a word $\bfw \in \fA_N$, denoted by $|\bfw|$, is the number of letters contained in $\bfw$, and its \emph{depth}, denoted by $\dep(\bfw)$, is the number of letters $x_\eta$ ($\eta\in\gG_N$) contained in $\bfw$. Further, let $\fA_N^1$ denote the subalgebra of $\fA_N$ consisting of words not ending with $x_0$. Hence, $\fA_N^1$ is generated by words of the form $y_{m,\mu}:=x_0^{m-1}x_\mu$ for $m\in \mathbb N$ and $\mu \in \gG_N$. Define the alphabet $Y_N=\{y_{k,\mu}\colon k\in \mathbb N,\mu \in \gG_N\}$ and let $Y_N^\ast$ be the set of words (including the empty word) over $Y_N$. Additionally, let $\fA_N^0$ denote the subalgebra of $\fA_N^1$ consisting of words not beginning with $x_1$ and not ending with $x_0$. The words in $\fA_N^0$ are called \emph{admissible words}.

We now equip $\fA_N^1$ with a Hopf algebra structure $(\fA_N^1,\ast,\widetilde{\Delta}_\ast)$. The \emph{stuffle product} $\ast \colon \fA_N^1 \otimes \fA_N^1 \to \fA_N^1$ is defined as follows: \begin{enumerate}[(ST1)] \item $\be \ast w := w \ast \be := w$, \item $y_{m,\mu}u \ast y_{n,\nu}v :=y_{m,\mu}(u \ast y_{n,\nu}v)+y_{n,\nu}(y_{m,\mu}u \ast v) + y_{m+n,\mu\nu}(u\ast v)$, \end{enumerate} for any words $u,v,w\in \fA_N^1$, $m,n\in \mathbb N$ and $\mu,\nu\in \gG_N$; then extend it linearly to $\fA_N^1$. The coproduct $\widetilde{\Delta}_\ast \colon \fA_N^1 \to \fA_N^1\otimes \fA_N^1$ is defined by deconcatenation: \begin{align*} \widetilde{\Delta}_\ast(y_{s_1,\eta_1}\cdots y_{s_d,\eta_d}):= \sum_{j=0}^d y_{s_1,\eta_1}\cdots y_{s_j,\eta_j}\otimes y_{s_{j+1},\eta_{j+1}} \cdots y_{s_d,\eta_d}. \end{align*} Note that $(\fA_N^0,\ast)$ is a sub-algebra (but not a sub-Hopf algebra).

We also need another Hopf algebra structure $(\fA_N,\sha, \widetilde{\Delta}_\sha)$, which will provide the shuffle relations. Here, the \emph{shuffle product} $\sha\colon \fA_N\otimes \fA_N\to \fA_N$ is defined as follows: \begin{enumerate}[(SH1)] \item $\be \sha w := w \sha \be := w$, \item $au \sha b v := a(u\sha bv) + b(au\sha v)$, \end{enumerate} for any words $u,v,w \in \fA_N$ and $a,b\in X_N$; then extend it linearly to $\fA_N$. The coproduct $\widetilde{\Delta}_\sha \colon \fA_N \to \fA_N\otimes \fA_N$ is again defined by deconcatenation: \begin{align*} \widetilde{\Delta}_\sha (x_{\eta_1}\cdots x_{\eta_d}):= \sum_{j=0}^d x_{\eta_1}\cdots x_{\eta_j}\otimes x_{\eta_{j+1}}\cdots x_{\eta_{d}}, \end{align*} where $\eta_1,\ldots,\eta_d\in \gG_N\cup \{0\}$. Note that both $(\fA_N^1,\sha)$ and $(\fA_N^0,\sha)$ are sub-algebras (but not sub-Hopf algebras). Finally, we remark that both $(\fA_N^1,\ast)$ and $(\fA_N,\sha)$ are commutative and associative algebras.
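As the simplest instance of (ST1)--(ST2), taking $u=v=\be$ in (ST2) gives, for $m,n\in\N$ and $\mu,\nu\in\gG_N$, \begin{align*} y_{m,\mu} \ast y_{n,\nu} = y_{m,\mu}y_{n,\nu} + y_{n,\nu}y_{m,\mu} + y_{m+n,\mu\nu}, \end{align*} the word-algebra counterpart of the depth-one stuffle relation for the truncated series written out in the next section.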
\section{Finite colored multiple zeta values} We recall that finite MZVs and finite Euler sums are elements of the $\Q$-ring \begin{align*} \calA:=\prod_{p\in\calP} \F_p \bigg/\bigoplus_{p\in\calP} \F_p. \end{align*} Note that for $N=1,2$, $\calA$ can be identified with $\calA(N)$ since all primes greater than 2 are odd and we can safely disregard the prime $p=2$. By our choice of the primes in $\calP(N)$, it follows immediately from Fermat's Little Theorem that the Frobenius endomorphism (the $p$-power map at the prime-$p$ component) is given by the \emph{conjugation} $(\ol{a_p})_p\in \calA(N)$, where \begin{align}\label{equ:pPower=inv} \ol{\sum_{j=0}^{N-1} c_j \xi_N^j}:=\sum_{j=0}^{N-1} c_j \xi_N^{N-j}, \qquad (c_j\in \F_p). \end{align} The following lemma implies that $\calA(N)$ is in fact a $\Q(\gG_N)$-vector space. \begin{lem} The field $\Q(\gG_N)$ can be embedded into $\calA(N)$ diagonally. \end{lem} \begin{proof} The map $\phi\colon \mathbb Q \to \calA(N)$, $r\mapsto (\phi_p(r))_{p\in \calP(N)}$, given by $\phi(0)=(0)_p$ and \begin{align*} \phi_p(r):=\begin{cases} r \pmod p & \text{~if~} \ord_p(r)\geq 0, \\ 0 & \text{~otherwise}, \end{cases} \end{align*} embeds $\mathbb Q$ diagonally in $\calA(N)$ due to the fundamental theorem of arithmetic. The cyclotomic field $\mathbb Q(\gG_N)$ is given by \begin{align*} \mathbb Q(\gG_N):=\left\{\sum_{j=0}^{N-1}a_j\xi_N^j\colon a_j\in \mathbb Q \right\}, \end{align*} where $\xi_N\in\gG_N$ is a primitive element. Therefore $\varphi\colon \Q(\gG_N) \to \calA(N)$ defined by \begin{align*} \sum_{j=0}^{N-1}a_j\xi_N^j \longmapsto \left(\sum_{j=0}^{N-1} \phi_p(a_j)\xi_N^j \right)_{p\in \calP(N)} \end{align*} is an embedding. \end{proof}

In this section, we study $\mathbb Q(\gG_N)$-linear relations among FCVs by developing a double shuffle picture. First, as for MZVs, the stuffle product is simply induced by the defining series of FCVs \eqref{equ:defnFCV}. For example, we have \begin{align*} \sum_{p>k>0}\frac{\alpha^k}{k^a} \sum_{p>l>0}\frac{\beta^l}{l^b} = \sum_{p>k>l>0}\frac{\alpha^k\beta^l}{k^al^b} + \sum_{p>l>k>0}\frac{\alpha^k\beta^l}{k^al^b} + \sum_{p>k>0}\frac{(\alpha\beta)^k}{k^{a+b}} \end{align*} for $a,b\in \mathbb N$ and $\alpha,\beta \in \gG_N$. On the other hand, the shuffle product is more involved than that for the MZVs, since apparently no integral representation for FCVs is available. However, we will deduce the shuffle relations by using the integral representation of the single-variable multiple polylogarithms. Define the $\Q(\gG_N)$-linear map $\zeta_{\calA(N),\ast}\colon \fA_N^1\to\calA(N)$ by setting \begin{align*} \zeta_{\calA(N),\ast}(w):=\zeta_{\calA(N)}\lrp{\bfs}{\bfeta} \end{align*} for any word $\displaystyle w=W\lrp{\bfs}{\bfeta}:=y_{s_1,\eta_1} \cdots y_{s_d,\eta_d} \in \fA_N^1$, where $\bfs=(s_1,\ldots,s_d)\in \N^d$ and $\bfeta=(\eta_1,\dots,\eta_d)\in(\gG_N)^d$.

\begin{thm}\label{thm:stuffleFCV} The map $\zeta_{\calA(N),\ast}\colon (\fA_N^1,\ast)\to\calA(N)$ is an algebra homomorphism. \end{thm} \begin{proof} Let $u,v\in \fA_N^1$. It is easily seen by induction on $|u|+|v|$ that \begin{align*} \zeta_{\calA(N),\ast}(u\ast v)= \zeta_{\calA(N),\ast}(u)\zeta_{\calA(N),\ast}(v), \end{align*} which concludes the proof. \end{proof}

We now define a map $\bfp\colon \fA_N^1\to \fA_N^1$ by setting \begin{align*} \bfp(y_{s_1,\eta_1}y_{s_2,\eta_2}\cdots y_{s_d,\eta_d})= y_{s_1,\eta_1}y_{s_2,\eta_1\eta_2}\cdots y_{s_d,\eta_1\eta_2\cdots \eta_d} \end{align*} and its inverse $\bfq\colon \fA_N^1\to \fA_N^1$ by setting \begin{align*} \bfq(y_{s_1,\eta_1}y_{s_2,\eta_2}\cdots y_{s_d,\eta_d})= y_{s_1,\eta_1}y_{s_2,\eta_2 \eta_1^{-1}}\cdots y_{s_d,\eta_d\eta_{d-1}^{-1}}. \end{align*} Further, define the map $\tau\colon \fA_1^1 \to \fA_1^1$ by $\tau(\be):=\be$ and \begin{align*} \tau(x_0^{s_1-1}x_1\cdots x_0^{s_d-1}x_1) := (-1)^{s_1+\cdots+s_d} x_0^{s_d-1}x_1\cdots x_0^{s_1-1}x_1, \end{align*} extended to $\fA_1^1$ by linearity.

The next theorem can be proved in the same manner as \cite[Thm.~3.11]{Zhao15}. In order to be self-contained, we present its complete proof. For any $s,s_1,\dots,s_d\in\N$ and $\xi,\xi_1,\dots,\xi_d\in\gG_N$, we define the following function of the complex variable $z$ with $|z|<1$: \begin{align*} \zeta_\sha(y_{s,\xi} y_{s_1,\xi_1} \dots y_{s_d,\xi_d};z) :=& \Li_{s,s_1,\dots,s_d}(z\xi,\xi_1/\xi,\xi_2/\xi_1,\dots,\xi_d/\xi_{d-1}) \\ =&\, \int_{[0,z]} \left(\frac{dt}{t}\right)^{s-1} \frac{dt}{\xi^{-1}-t} \left(\frac{dt}{t}\right)^{s_1-1} \frac{dt}{\xi_1^{-1}-t} \cdots \left(\frac{dt}{t}\right)^{s_d-1} \frac{dt}{\xi_d^{-1}-t}, \end{align*} where $[0,z]$ is the straight line from 0 to $z$. It is clear that $\zeta_\sha(-;z)$ is well-defined. By the shuffle relation of iterated integrals, we get \begin{align}\label{equ:zetaShaz} \zeta_\sha(u;z)\zeta_\sha(v;z)=\zeta_\sha(u\sha v;z) \end{align} for all $u,v\in Y_N^\ast$.

\begin{thm}\label{thm:shuffleFCV} Let $N\ge 1$. Define the map $\zeta_{\calA(N),\sha}:= \zeta_{\calA(N),\ast} \circ \bfq\colon \fA_N^1\to\calA(N)$. Then we have \begin{align*} \zeta_{\calA(N),\sha}(u \sha v)=\zeta_{\calA(N),\sha}(\tau(u)v) \end{align*} for any $u\in \fA_1^1$ and $v\in \fA_N^1$. These relations are called linear shuffle relations for the FCVs. \end{thm} \begin{proof} Obviously, it suffices to prove \begin{align*} \zeta_{\calA(N),\sha}((x_0^{s-1}x_1 u) \sha v) =(-1)^s \zeta_{\calA(N),\sha}(u \sha (x_0^{s-1}x_1v)) \end{align*} for $s\in \mathbb N$, $u\in \fA_1^1$ and $v\in \fA_N^1$. For simplicity we write $a:=x_0$ and $b:=x_1$ in the rest of this proof. Let $w$ be a word in $\fA_N^1$, i.e., there exist $\bfs \in \mathbb{N}^d$ and $\bfxi\in (\gG_N)^d$ such that $w=y_{s_1,\xi_1} \dots y_{s_d,\xi_d}$. Then there exists $\bfeta\in (\gG_N)^d$ such that $\displaystyle \bfq(w)=W\lrp{\bfs}{\bfeta}$. For any prime $p>2$, the coefficient of $z^p$ in $\zeta_\sha(bw;z)$ is given by \begin{equation*} {\rm Coeff}_{z^p}\big[\zeta_\sha(bw;z)\big]= \frac{1}{p}\sum_{p>k_1>\cdots>k_d>0} \frac{\eta_1^{k_1}\cdots \eta_d^{k_d}}{k_1^{s_1}\cdots k_d^{s_d}} =\frac{1}{p} H_{p-1}(\bfq(w)), \end{equation*} where \begin{align*} H_k(y_{s_1,\eta_1} \dots y_{s_d,\eta_d}):=\sum_{k\geq k_1>\cdots>k_d>0} \frac{\eta_1^{k_1}\cdots \eta_d^{k_d}}{k_1^{s_1}\cdots k_d^{s_d}}. \end{align*} Observe that \begin{align}\label{eq:shdecomp} b \Big( (a^{s-1}b u) \sha v -(-1)^s u \sha (a^{s-1}b v)\Big) =\sum_{j=0}^{s-1} (-1)^{j} (a^{s-1-j}b u) \sha (a^j b v). \end{align} By applying $\zeta_\sha(-;z)$ to \eqref{eq:shdecomp} and then extracting the coefficients of $z^p$ from both sides, we obtain \begin{align*} &\, \frac{1}{p}\Big( H_{p-1}\circ\bfq\big((a^{s-1}b u) \sha v\big) -(-1)^s H_{p-1}\circ\bfq\big(u \sha (a^{s-1}b v)\big)\Big)\\ =&\,\sum_{j=0}^{s-1} (-1)^{j} {\rm Coeff}_{z^p} \big[\zeta_\sha(a^{s-1-j}b u;z)\zeta_\sha(a^{j}b v;z)\big] \\ =&\,\sum_{j=0}^{s-1} (-1)^{j}\sum_{k=1}^{p-1} {\rm Coeff}_{z^k}\big[\zeta_\sha(a^{s-1-j}b u;z) \big] {\rm Coeff}_{z^{p-k}}\big[\zeta_\sha(a^{j}b v;z)\big] \end{align*} by \eqref{equ:zetaShaz}. Since $k<p$ and $p-k<p$, the last sum is $p$-integral. Therefore we get \begin{equation*} H_{p-1}\circ\bfq((a^{s-1}b u) \sha v)\equiv (-1)^s H_{p-1}\circ\bfq(u \sha (a^{s-1}b v)) \pmod{p}, \end{equation*} which completes the proof of the theorem. \end{proof}
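As the simplest instance of Theorem \ref{thm:shuffleFCV}, take $s=1$, $u=\be$ and $v=x_\eta$ with $\eta\in\gG_N$. Then $x_1\sha x_\eta = x_1x_\eta+x_\eta x_1$ and $\tau(x_1)=-x_1$, while $\bfq(y_{1,1}y_{1,\eta})=y_{1,1}y_{1,\eta}$ and $\bfq(y_{1,\eta}y_{1,1})=y_{1,\eta}y_{1,\ol{\eta}}$, so the theorem specializes to the weight-two relation \begin{align*} \zeta_{\calA(N)}\lrp{1,1}{\eta,\ol{\eta}} = -2\,\zeta_{\calA(N)}\lrp{1,1}{1,\eta}. \end{align*}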
\section{Symmetrized colored multiple zeta values} \subsection{Regularizations of colored MZVs} Since the regularized colored MZVs (recalled in Theorem \ref{theo:Racinet} below) are polynomials in $T$, both kinds of SCVs defined by \eqref{equ:astSCV} and \eqref{equ:shaSCV} are \emph{a priori} also polynomials in $T$. We show first that they are in fact constant complex numbers. To this end, we define two maps $\zeta_\ast,\zeta_\sha \colon \fA_N^0 \to \CC$ such that for any word $\displaystyle w=W\lrp{\bfs}{\bfeta}\in \fA_N^0$ \begin{align*} \zeta_\ast(w):=\zeta\lrp{\bfs}{\bfeta} \qquad\text{~and~}\qquad \zeta_\sha(w):=\zeta_\sha \Lrp{s_1,s_2,\ldots,\ \, s_d\ \, }{\eta_1,\frac{\eta_{2}}{\eta_1},\ldots, \frac{\eta_{d}}{\eta_{d-1}}}. \end{align*} \begin{lem}[\cite{Arakawa04,Racinet02}] The maps $\zeta_\ast \colon (\fA_N^0,\ast) \to \CC$ and $\zeta_\sha \colon (\fA_N^0,\sha) \to \CC$ are algebra homomorphisms. \end{lem} This first preliminary result is well known. The map $\zeta_\ast$ originates from the series definition \eqref{def:CMZV}, while the map $\zeta_\sha$ comes from the integral representation obtained by setting $z=1$ in \eqref{equ:zetaShaz}. The following results of Racinet \cite{Racinet02}, addressing the regularization of colored MZVs, generalize those for the MZVs first discovered by Ihara, Kaneko and Zagier \cite{Ihara06}.

\begin{thm}\label{theo:Racinet} Let $N$ be a positive integer. \begin{enumerate}[(i)] \item The algebra homomorphism $\zeta_\ast \colon (\fA_N^0,\ast) \to \CC$ can be extended to a homomorphism $\zeta_{\ast}(-;T)\colon(\fA_N^1,\ast)\to\CC[T]$, where $\zeta_{\ast}(y_{1,1};T)=T$. \item The algebra homomorphism $\zeta_\sha \colon (\fA_N^0,\sha) \to \CC$ can be extended to a homomorphism $\zeta_{\sha}(-;T)\colon(\fA_N^1,\sha)\to\CC[T]$, where $\zeta_{\sha}(x_1;T)=T$. \item For all $w\in\fA_N^1$, we have \begin{align*} \zeta_{\sha}\big(\bfp(w);T\big)=\rho \big(\zeta_{\ast}(w;T)\big), \end{align*} where $\rho\colon \CC[T]\to \CC[T]$ is the $\CC$-linear map such that \begin{align*} \rho(e^{Tu})=\exp\left(\sum_{n=2}^\infty \frac{(-1)^n}{n}\zeta(n)u^n\right)e^{Tu} \end{align*} for all $|u|<1$. \end{enumerate} \end{thm} In the above theorem, to emphasize that the images of $\zeta_{\sha}$ and $\zeta_{\ast}$ are polynomials in $T$, we have used the notation $\zeta_{\sha}(w;T)$ and $\zeta_{\ast}(w;T)$.

\begin{prop}\label{prop:SCVconst} For all $\bfs\in \N^d$ and $\bfeta\in (\gG_N)^d$, both $\displaystyle \zeta_\ast^\Sy\lrp{\bfs}{\bfeta}$ and $\displaystyle \zeta_\sha^\Sy\lrp{\bfs}{\bfeta}$ are constant complex values, i.e., they are independent of $T$. \end{prop} \begin{proof} We start with the shuffle version $\displaystyle \zeta_\sha^\Sy\lrp{s_1,\ldots, s_d}{\eta_1,\ldots,\eta_d}$. If $(s_j,\eta_j)\ne (1,1)$ for all $j=1,\ldots,d$, then clearly it is finite. If $(s_j,\eta_j)=(1,1)$ for all $j=1,\ldots,d$, then the binomial theorem implies \begin{equation*} \zeta_\sha^\Sy\lrp{\bfs}{\bfeta}=\sum_{j=0}^d (-1)^{d-j} \frac{T^j}{j!}\frac{T^{d-j}}{(d-j)!} =0. \end{equation*} Otherwise, we may assume that for some $k$ and $l\ge 1$ we have $(s_k,\eta_k)\ne(1,1)$, $(s_{k+1},\eta_{k+1})=\cdots=(s_{k+l},\eta_{k+l})=(1,1)$, and $(s_{k+l+1},\eta_{k+l+1})\ne(1,1)$. We only need to show that \begin{equation*} \sum_{j=0}^l (-1)^{j} \zeta_\sha\lrpT{s_{k+j},\dots,s_1}{\eta_{k+j},\ldots,\eta_1} \zeta_\sha\lrpT{s_{k+j+1},\dots,s_d}{\eta_{k+j+1},\ldots,\eta_d} \end{equation*} is finite. Consider the words in $\fA_N^1$ corresponding to the above values. We may assume $x_\gl u=x_0^{s_k-1}x_{\eta_k} \cdots x_0^{s_1-1}x_{\eta_1}$ (which is the empty word if $k=0$) and $x_\mu v=x_0^{s_{k+l+1}-1}x_{\eta_{k+l+1}} \cdots x_0^{s_d-1}x_{\eta_d}$ (which is the empty word if $k+l=d$), where $\gl,\mu\ne 1$. So we only need to show that \begin{equation}\label{equ:symmtrizeWordsAdm} \sum_{j=0}^l (-1)^j (x_1^jx_\gl u)\sha(x_1^{l-j} x_\mu v) \in \fA_N^0. \end{equation} Let $q_0=0$ and $q_m=1$ for all $m\ne 0$. Then we have \begin{align*} &\,\sum_{j=0}^l (-1)^j (x_1^jx_\gl u) \sha (x_1^{l-j} x_\mu v)\\ =&\, q_k x_\gl(u\sha x_1^l x_\mu v) +\sum_{j=1}^l (-1)^{j}x_1 (x_1^{j-1} x_\gl u\sha x_1^{l-j} x_\mu v)\\ +&\, q_{k+l-d}(-1)^l x_\mu(x_1^l x_\gl u \sha v) + \sum_{j=0}^{l-1} (-1)^j x_1 (x_1^j x_\gl u\sha x_1^{l-j-1}x_\mu v) \\ =&\,q_k x_\gl (u\sha x_1^l x_\mu v)+q_{k+l-d}(-1)^l x_\mu( x_1^l x_\gl u \sha v)\in \fA_N^0, \end{align*} where the substitution $j\to j+1$ in the first sigma summation shows that it cancels the second sigma summation. For the stuffle version $\zeta_\ast^\Sy\lrp{\bfs}{\bfeta}$ we can use the same idea, because the stuffing parts, i.e., the contractions of two beginning letters, always produce admissible words. We leave the details to the interested reader. \end{proof}
\subsection{Generating series} Using the algebraic framework of Section \ref{sec:algframe}, we define for $\bfs:=(s_1,\ldots,s_d)\in \N^d$ and $\bfeta:=(\eta_1,\ldots,\eta_d)\in (\gG_N)^d$ the following maps: \begin{align*} \zeta_\ast^\Sy &\colon \fA_N^1 \to \CC, \qquad \zeta_\ast^\Sy(y_{s_1,\eta_1}\cdots y_{s_d,\eta_d}):=\zeta_\ast^\Sy\lrp{\bfs}{\bfeta}, \\ \zeta_{\sha}^\Sy &\colon \fA_N^1 \to \CC, \qquad \zeta_\sha^\Sy(y_{s_1,\eta_1}\cdots y_{s_d,\eta_d}):=\zeta_\sha^\Sy\Lrp{s_1,s_2,\ldots,\ \, s_d\ \, }{\eta_1,\frac{\eta_{2}}{\eta_1},\ldots, \frac{\eta_{d}}{\eta_{d-1}}}. \end{align*} In order to study the SCVs effectively, we need to utilize their generating series, which should be associated with dual objects of the Hopf algebras $(\fA_N^1,\ast,\widetilde{\Delta}_\ast)$ and $(\fA_N,\sha, \widetilde{\Delta}_\sha)$. So we denote by $\hfA_N$ the completion of $\fA_N$ with respect to the weight (and define $\hfA_N^1$ and $\hfA_N^0$ similarly). Let $R$ be any $\Q$-algebra. Define the coproduct $\gD_\ast$ on $R\langle\!\langle Y_N^\ast \rangle\!\rangle:=\hfA_N^1\otimes_\Q R$ by $\gD_\ast(\be):=\be \otimes\be$ and \begin{align*} \gD_\ast(y_{k,\xi}):=\be \otimes y_{k,\xi}+y_{k,\xi}\otimes \be +\sum_{a+b=k,\,a,b\in\N \atop \gl\eta=\xi,\,\gl,\eta\in\gG_N} y_{a,\gl}\otimes y_{b,\eta} \end{align*} for all $k\in\N$ and $\xi\in\gG_N$, and then extend it $R$-linearly. One can check easily that $(R\langle\!\langle Y_N^\ast \rangle\!\rangle, \ast,\gD_\ast)$ is dual to the Hopf algebra $(R\langle Y_N^\ast \rangle, \ast,\tilde{\gD}_\ast)$.

Take $R:=\CC[T]$. Let $\Psi_\ast^T$ be the generating series of the $\ast$-regularized colored MZVs, i.e., \begin{align*} \Psi_\ast^T:=\sum_{w\in Y_N^\ast} \zeta_\ast(w;T)\, w \in \CC[T]\langle\!\langle Y_N^\ast \rangle\!\rangle, \end{align*} which, by the above considerations, can be regarded as an element in the ring of regular functions on $\CC[T]\langle Y_N^\ast\rangle$. Namely, we can define $\Psi_\ast^T[w]$ to be the coefficient of $w$ in $\Psi_\ast^T$. Further we set $\Psi_\ast:=\Psi_\ast^0$.

The shuffle version of $\Psi_\ast$ is more involved, even though the basic idea is the same. First, for any $\Q$-algebra $R$ we can define the coproduct $\gD_\sha$ on $R\langle\!\langle X_N^\ast \rangle\!\rangle:=\hfA_N\otimes_\Q R$ by \begin{align*} \gD_\sha(\be):=\be \otimes \be \quad \text{and} \quad \gD_\sha(x_\gl):=x_\gl\otimes\be+\be \otimes x_\gl \end{align*} for all $\gl\in\gG_N\cup\{0\}$. Let $\eps_\sha$ be the counit such that $\eps_\sha(\be)=1$ and $\eps_\sha(w)=0$ for all $w\ne\be$. Then the Hopf algebra $(R\langle\!\langle X_N^\ast \rangle\!\rangle,\sha, \gD_\sha)$ is dual to $(R\langle X_N^\ast \rangle,\sha,\tilde{\gD}_\sha)$. Now we can define \begin{align*} \Psi_\sha^T:=\sum_{w\in Y_N^\ast} \zeta_{\sha}(\bfp(w);T)\, w \in \CC[T]\langle\!\langle Y_N^\ast \rangle\!\rangle. \end{align*}

\begin{lem}\label{lem:group-likePsi} We have \begin{align*} \Psi_\ast^T=\exp(T y_{1,1})\, \Psi_\ast \quad \text{and} \quad \Psi_\sha^T=\exp(T y_{1,1})\, \Psi_\sha. \end{align*} \end{lem} \begin{proof} Set $\widehat{\Psi}_\sha(T):=\sum_{w\in Y_N^\ast} \zeta_{\sha}(w;T)\, w$. Then $\widehat{\Psi}_\sha(T)$ is group-like for $\gD_\sha$. Further, $\Psi_\ast^T$ is group-like for $\gD_\ast$. It follows from Cor.~2.4.4 and Cor.~2.4.5 of \cite{Racinet02} that \begin{align*} \Psi_\ast^T=\exp(T y_{1,1})\, \Psi_\ast \quad \text{and} \quad \widehat{\Psi}_\sha(T)=\exp(T y_{1,1})\, \widehat{\Psi}_\sha(0), \end{align*} since $x_1=y_{1,1}$. Since $\bfq(y_{1,1}^n w)=y_{1,1}^n \bfq(w)$ for all $n\in\Z_{\ge 0}$ and $w\in Y_N^\ast$, applying $\bfq$ to the second equality leads to \begin{align*} \sum_{w\in Y_N^\ast} \zeta_{\sha}(w;T)\, \bfq(w) =\exp(T y_{1,1})\bigg( \sum_{w\in Y_N^\ast} \zeta_{\sha}(w;0)\, \bfq(w) \bigg), \end{align*} which is equivalent to $\Psi_\sha^T=\exp(T y_{1,1})\, \Psi_\sha$, as desired. \end{proof}

For all $s_1,\dots,s_d\in\N$ and $\eta_1,\dots,\eta_d\in\gG_N$, we define the anti-automorphism $\inv\colon Y_N^\ast \to Y_N^\ast$ by \begin{equation*} \inv(y_{s_1,\eta_1} \cdots y_{s_d,\eta_d}):= (-1)^{s_1+\cdots+s_d}\, \ol{\eta_1}\cdots \ol{\eta_d}\, y_{s_d,\eta_d}\cdots y_{s_1,\eta_1}, \end{equation*} extended to $\CC\langle\!\langle Y_N^\ast \rangle\!\rangle$ by linearity.

\begin{lem}\label{lem:invPsi} We have \begin{align*} \inv(\Psi_\ast)\Psi_\ast=\inv(\Psi_\ast^T)\Psi_\ast^T=&\,\sum_{w\in Y_N^\ast} \zeta_\ast^\Sy(w)\, w, \\ \inv(\Psi_\sha)\Psi_\sha=\inv(\Psi_\sha^T)\Psi_\sha^T=&\,\sum_{w\in Y_N^\ast} \zeta_\sha^\Sy(\bfp(w))\, w. \end{align*} \end{lem} \begin{proof} In each of the two lines above, the second equality follows directly from the definition, while the first is an immediate consequence of Proposition~\ref{prop:SCVconst}. \end{proof}

\begin{thm}\label{theo:moduloz2} For any $\bfs\in\N^d$ and $\bfeta\in(\gG_N)^d$, we have \begin{align*} \zeta_\sha^\Sy\lrp{\bfs}{\bfeta}\equiv \zeta_\ast^\Sy\lrp{\bfs}{\bfeta} \pmod{\zeta(2)}. \end{align*} \end{thm} \begin{proof} By Lemma~\ref{lem:group-likePsi} and Theorem~\ref{theo:Racinet}, we get \begin{align*} \exp(T y_{1,1})\, \Psi_\sha= \Psi_\sha^T=\rho(\Psi_\ast^T)=\exp(T y_{1,1})\, \gL(y_{1,1})\Psi_\ast, \end{align*} where $\gL(y_{1,1}):=\exp\left(\sum_{n=2}^\infty\frac{(-1)^n}{n}\zeta(n) y_{1,1}^n\right)$. Therefore $\Psi_\sha= \gL(y_{1,1})\Psi_\ast$. Using the fact that $\zeta(2n)\in \zeta(2)^n\,\mathbb{Q}$ for all $n\in \mathbb{N}$, we obtain \begin{align*} \inv(\Psi_\sha)\Psi_\sha=\inv(\Psi_\ast)\gL(-y_{1,1})\gL(y_{1,1})\Psi_\ast \equiv \inv(\Psi_\ast)\Psi_\ast \pmod{\zeta(2)}. \end{align*} Hence, the theorem follows from Lemma~\ref{lem:invPsi}. \end{proof}
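Explicitly, only the even terms survive in the product of the two exponentials: \begin{align*} \gL(-y_{1,1})\gL(y_{1,1}) =\exp\left(\sum_{n\ge 2,\ n \text{ even}}\frac{2}{n}\,\zeta(n)\, y_{1,1}^{n}\right) =1+\zeta(2)\,y_{1,1}^{2}+\cdots, \end{align*} and every term after the leading $1$ carries a factor $\zeta(2n)\in\zeta(2)^n\Q$; this is exactly where the hypothesis on the even zeta values enters.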
\subsection{Shuffle and stuffle relations} We first prove the stuffle relations of the SCVs. \begin{thm}\label{thm:astSCVmorphism} The map $\zeta_\ast^\Sy\colon (\fA_N^1,\ast) \to \CC$ is a homomorphism of algebras, i.e., \begin{align*} \zeta_\ast^\Sy(w\ast w') = \zeta_\ast^\Sy(w)\zeta_\ast^\Sy(w') \end{align*} for all $w,w'\in \fA_N^1$. \end{thm} \begin{proof} Since $\zeta_\ast\colon (\fA_N^1,\ast)\to \CC[T]$ is an algebra homomorphism, its generating series $\Psi_\ast^T$ must be a group-like element for $\gD_\ast$, i.e., $\gD_\ast(\Psi_\ast^T) = \Psi_\ast^T \otimes \Psi_\ast^T$. Further, it can be checked in a straightforward manner that $\gD_\ast \circ\inv = (\inv \otimes \inv)\circ \gD_\ast$. Thus we get \begin{align*} \gD_\ast\big(\inv(\Psi_\ast^T)\Psi_\ast^T\big) = \big(\inv(\Psi_\ast^T)\Psi_\ast^T\big) \otimes \big(\inv(\Psi_\ast^T)\Psi_\ast^T\big), \end{align*} and Lemma \ref{lem:invPsi} implies the claim. \end{proof}

For the shuffle relations we need the \emph{generalized Drinfeld associator $\Phi=\Phi_N$ at level $N$}. Enriquez \cite{Enriquez2007} defined it as the renormalized holonomy from 0 to 1 of \begin{equation} \label{equ:generalizedDrinfeld} H'(z) = \left( \sum_{\eta\in \gG_N \cup \{0\}} \frac{x_\eta}{z-\eta} \right) H(z), \end{equation} i.e., $\Phi:=H_1^{-1}H_0$, where $H_0,H_1$ are the solutions of \eqref{equ:generalizedDrinfeld} on the open interval $(0,1)$ such that $H_0(z) \sim z^{x_0}=\exp(x_0 \log z)$ as $z\to 0^+$ and $H_1(z) \sim (1-z)^{x_1}=\exp(x_1 \log(1-z))$ as $z\to 1^-$.

\begin{thm} The generalized Drinfeld associator $\Phi$ is the unique element in the Hopf algebra $(\CC\langle\!\langle X_N^\ast \rangle\!\rangle,\sha,\gD_\sha,\eps_\sha)$ such that \begin{itemize} \item[\upshape{(i)}] $\Phi$ is group-like, i.e., $\eps_\sha(\Phi)=1$ and $\gD_\sha(\Phi)=\Phi\otimes\Phi$, \item[\upshape{(ii)}] $\Phi[x_0]=\Phi[x_1]=0$, \item[\upshape{(iii)}] $\displaystyle \Phi[\bfp(x_0^{s_1-1} x_{\eta_1}\dots x_0^{s_d-1} x_{\eta_d})] =(-1)^d \zeta \lrpTZ{\bfs}{\bfeta}$ for any $\bfs:=(s_1,\dots,s_d)\in\N^d$ and $\bfeta:=(\eta_1,\dots,\eta_d)\in(\gG_N)^d$. \end{itemize} \end{thm} \begin{proof} The uniqueness, the statements (i) and (ii), and the case $(s_1,\eta_1)\ne(1,1)$ of (iii) follow directly from \cite[App.]{Enriquez2007} and \cite[Prop.~5.17]{Deligne05}. By Theorem~\ref{theo:Racinet} (ii), if $(s_1,\eta_1)=(1,1)$ then $\zeta \lrpTZ{\bfs}{\bfeta}$ is determined uniquely by the admissible values via the shuffle structure, using (ii). But $\Phi[\bfp(x_0^{s_1-1} x_{\eta_1}\dots x_0^{s_d-1} x_{\eta_d})]$ is also determined uniquely by the coefficients of admissible words via the same shuffle structure, so (iii) still holds even if $(s_1,\eta_1)=(1,1)$. This completes the proof of the theorem. \end{proof}

For any $\eta\in\gG_N$, we define the map $r_\eta\colon X_N^\ast\to X_N^\ast$ by setting \begin{align*} r_\eta(x_0^{a_1} x_{\eta_1}^{b_1}\dots x_0^{a_d} x_{\eta_d}^{b_d}) :=x_0^{a_1} x_{\eta_1/\eta}^{b_1}\dots x_0^{a_d} x_{\eta_d/\eta}^{b_d} \end{align*} for all $a_1,b_1,\dots,a_d,b_d\in\Z_{\ge 0}$ and $\eta_1,\dots,\eta_d\in\gG_N$.

\begin{lem}\label{lem:eta-twistPhi} For any $\eta\in\gG_N$, the \emph{$\eta$-twist $\Phi_{\eta}$ of $\Phi$} defined by \begin{align*} \Phi_{\eta}:=\sum_{w\in X_N^\ast} \Phi[r_\eta(w)]\, w \end{align*} is group-like for $\gD_\sha$, and $\Phi_{\eta}^{-1}$ is well-defined. \end{lem} \begin{proof} For any words $u,v\in X_N^\ast$, we have \begin{multline*} \gD_\sha(\Phi_\eta) [u\otimes v]=\Phi_\eta[ u\sha v]=\Phi[r_\eta ( u\sha v)]\\ =\Phi[r_\eta (u) \sha r_\eta (v)] =\Phi[r_\eta (u)]\Phi[r_\eta(v)] =(\Phi_\eta\otimes\Phi_\eta)[u\otimes v], \end{multline*} since $\Phi$ is group-like. Thus $\gD_\sha(\Phi_\eta)=\Phi_\eta\otimes\Phi_\eta$. Further, $\Phi_{\eta}^{-1}$ is well-defined since \begin{align*} (\Phi_{\eta})^{-1}=\sum_{w\in X_N^\ast} (-1)^{|w|}\Phi[r_\eta(w)]\, \revs{w} =\sum_{w\in X_N^\ast} (-1)^{|w|}\Phi[r_\eta(\revs{w})]\,w=(\Phi^{-1})_{\eta}, \end{align*} where $\revs{w}=\ga_d\ga_{d-1}\cdots\ga_1$ is the reversal of the word $w=\ga_1\cdots \ga_{d-1}\ga_d\in X_N^\ast$ with letters $\ga_j\in X_N$ for $j=1,\ldots,d$. \end{proof}

\begin{thm}\label{thm:SCVDrinfeldAss} For any word $w\in \fA_N^1$ of depth $d$, we have \begin{align*} \zeta_\sha^\Sy(w)=(-1)^d\sum_{\eta\in\gG_N} \ol{\eta}\, \Phi_{\eta}^{-1} x_{\eta} \Phi_{\eta} [x_1 w]. \end{align*} \end{thm} \begin{proof} First we observe that \begin{align}\label{eq:invphi} \Phi_{\eta}^{-1}[x_1 x_0^{s_1-1} x_{\eta_1}\cdots x_{\eta_{j-1}} x_0^{s_j-1}] = (-1)^{s_1+\cdots+s_j}\Phi_{\eta}[ x_0^{s_j-1} x_{\eta_{j-1}}\cdots x_{\eta_1} x_0^{s_1-1}x_1] \end{align} for $j=1,\ldots,d$. Then we obtain (by setting $\eta_0:=1$) \begin{align*} & (-1)^d\sum_{\eta\in\gG_N} \ol{\eta}\, \Phi_{\eta}^{-1} x_{\eta} \Phi_{\eta} \big[x_1 x_0^{s_1-1} x_{\eta_1}\dots x_0^{s_d-1} x_{\eta_d}\big] \\ = &~ (-1)^d \sum_{j=0}^d \ol{\eta_j}\, \Phi_{\eta_j}^{-1}\big[x_1 x_0^{s_1-1} x_{\eta_1}\cdots x_{\eta_{j-1}}x_0^{s_{j}-1}\big] \Phi_{\eta_j}\big[x_0^{s_{j+1}-1}x_{\eta_{j+1}}\cdots x_0^{s_d-1}x_{\eta_d}\big]\\ = &~ (-1)^d \sum_{j=0}^d (-1)^{s_1+\cdots+s_j}\ol{\eta_j}\, \Phi_{\eta_j}\big[x_0^{s_{j}-1}x_{\eta_{j-1}}\cdots x_0^{s_1-1} x_1\big] \Phi_{\eta_j}\big[x_0^{s_{j+1}-1}x_{\eta_{j+1}}\cdots x_0^{s_d-1}x_{\eta_d}\big]\\ = &~ (-1)^d \sum_{j=0}^d (-1)^{s_1+\cdots+s_j}\ol{\eta_j}\, \Phi\big[x_0^{s_{j}-1}x_{\eta_{j-1}/\eta_j}\cdots x_0^{s_1-1} x_{\eta_0/\eta_j}\big] \Phi\big[x_0^{s_{j+1}-1}x_{\eta_{j+1}/\eta_j}\cdots x_0^{s_d-1}x_{\eta_d/\eta_j}\big]\\ = &~ (-1)^d \sum_{j=0}^d (-1)^{s_1+\cdots+s_j}\ol{\eta_j} \cdot (-1)^j \zeta_\sha\LrpTZ{\ s_j\ , \ldots,s_1} {\frac{\eta_{j-1}}{\eta_{j}},\ldots,\frac{\eta_0}{\eta_1} } \cdot (-1)^{d-j} \zeta_\sha\LrpTZ{s_{j+1},\ldots,\ s_d\ } {\frac{\eta_{j+1}}{\eta_j},\ldots,\frac{\eta_d}{\eta_{d-1}} }\\ = & ~ \zeta_\sha^\Sy \Lrp{s_1,s_2,\ldots,\ \, s_d\ \, }{\eta_1,\frac{\eta_{2}}{\eta_1},\ldots, \frac{\eta_{d}}{\eta_{d-1}}} \end{align*} by Proposition~\ref{prop:SCVconst}. This completes the proof. \end{proof}
\begin{thm}\label{thm:shuffleSCV} Let $N\ge 1$. For any $u\in\fA_1^1$ and $v\in\fA_N^1$, we have \begin{align*} \zeta_\sha^\Sy(u\sha v)= \zeta_\sha^\Sy(\tau(u)v). \end{align*} These relations are called linear shuffle relations for the SCVs. \end{thm} \begin{proof} It suffices to prove \begin{align*} \zeta_\sha^\Sy( (x_0^{s-1}x_1 u) \sha v) =(-1)^s \zeta_\sha^\Sy(u \sha (x_0^{s-1}x_1 v)) \end{align*} for all $s\in\N$, $u\in\fA_1^1$ and $v\in\fA_N^1$. We observe that \begin{equation}\label{equ:sumOfShuffles} x_1 \Big( (x_0^{s-1}x_1 u)\sha v-(-1)^s u\sha(x_0^{s-1} x_1 v)\Big) =\sum_{i=0}^{s-1} (-1)^i (x_0^{s-1-i}x_1 u)\sha(x_0^{i}x_1v). \end{equation} By Theorem~\ref{thm:SCVDrinfeldAss}, it suffices to show that the image of \eqref{equ:sumOfShuffles} under $E:=\Phi_{\eta}^{-1}x_\eta\Phi_{\eta}$ vanishes for all $\eta\in\gG_N$. By Lemma~\ref{lem:eta-twistPhi}, \begin{equation*} \gD_\sha(E)=(\Phi_{\eta}^{-1}\ot\Phi_{\eta}^{-1}) (x_\eta\ot \be+\be\ot x_\eta)(\Phi_{\eta}\ot\Phi_{\eta}) =E\ot \be+\be\ot E. \end{equation*} Therefore $E$ is a primitive element for $\gD_\sha$, so we can regard it as a Lie element; namely, it acts on shuffle products like a derivation. Hence, for any nonempty words $u,v\in X_N^\ast$, \begin{equation*} E[u \sha v]=E[u]\eps_\sha[v]+ \eps_\sha[u]E[v]=0. \end{equation*} This completes the proof by Theorem~\ref{thm:SCVDrinfeldAss}, since none of the factors in the shuffle products on the right-hand side of Eq.~\eqref{equ:sumOfShuffles} is the empty word, as the letter $x_1$ appears in every factor. \end{proof}
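Exactly as for the FCVs, the simplest instance ($s=1$, $u=\be$, $v=x_\eta$) of Theorem~\ref{thm:shuffleSCV} reads \begin{align*} \zeta_\sha^\Sy(y_{1,\eta}y_{1,1})+\zeta_\sha^\Sy(y_{1,1}y_{1,\eta}) = -\zeta_\sha^\Sy(y_{1,1}y_{1,\eta}), \qquad\text{i.e.}\qquad \zeta_\sha^\Sy\lrp{1,1}{\eta,\ol{\eta}} = -2\,\zeta_\sha^\Sy\lrp{1,1}{1,\eta}, \end{align*} mirroring the weight-two linear shuffle relation for FCVs noted after Theorem~\ref{thm:shuffleFCV}, in keeping with Conjecture~\ref{conj:Main} (ii).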
\section{Reversal relations of FCVs and SCVs} Among the simplest but most important relations satisfied by FCVs and SCVs are the following reversal relations. \begin{prop}\label{prop:reversalFCVSCV} Let $\bfs \in \N^d$, $\bfeta \in (\gG_N)^d$, and define $\pr(\bfeta):=\prod_{j=1}^d \eta_j$. Then we have \begin{align}\label{equ:reversalFCV} \zeta_{\calA(N)} \lrp{\revs{\bfs}}{\revs{\bfeta}} =&\,(-1)^{|\bfs|} \pr(\ol{\bfeta})\, \zeta_{\calA(N)} \lrp{\bfs}{\ol{\bfeta}}, \\ \zeta_\sha^\Sy\lrp{\revs{\bfs}}{\revs{\bfeta}} =&\,(-1)^{|\bfs|} \pr(\ol{\bfeta})\, \zeta_\sha^\Sy\lrp{\bfs}{\ol{\bfeta}},\label{equ:reversalSCVsha}\\ \zeta_\ast^\Sy\lrp{\revs{\bfs}}{\revs{\bfeta}} =&\,(-1)^{|\bfs|} \pr(\ol{\bfeta})\, \zeta_\ast^\Sy\lrp{\bfs}{\ol{\bfeta}},\label{equ:reversalSCVast} \end{align} where $\revs{\bfa}:=(a_d,\dots,a_1)$ is the reversal of $\bfa:=(a_1,\ldots,a_d)$ and $\ol{\bfa}:=(\ol{a_1},\dots,\ol{a_d})$ is the componentwise conjugation of $\bfa$. \end{prop} \begin{proof} Equation \eqref{equ:reversalFCV} follows easily from the substitution $k_j\to p-k_j$ for the indices in \eqref{equ:defnFCV}, using the condition $p\equiv -1\pmod{N}$. Equations~\eqref{equ:reversalSCVsha} and \eqref{equ:reversalSCVast} follow easily from the definitions. \end{proof}
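In depth one, for instance, the substitution spells out as \begin{align*} \zeta_{\calA(N)}\lrp{w}{\eta} =\left(\sum_{p>k>0}\frac{\eta^{p-k}}{(p-k)^{w}}\right)_{p\in\calP(N)} =\left((-1)^{w}\,\ol{\eta}\sum_{p>k>0}\frac{\ol{\eta}^{\,k}}{k^{w}}\right)_{p\in\calP(N)} =(-1)^{w}\,\ol{\eta}\,\zeta_{\calA(N)}\lrp{w}{\ol{\eta}}, \end{align*} since $\eta^{p}=\ol{\eta}$ (as $p\equiv-1\pmod N$) and $(p-k)^{w}\equiv(-1)^{w}k^{w}\pmod p$.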
\section{Numerical examples}\label{sec:numex} In this last section, we provide some numerical examples in support of Conjecture~\ref{conj:Main}. We will need some results from the level 1 case. \begin{prop}\label{prop:homogeneousWols1} \emph{(\cite[Theo.~2.13]{Zhao08b})} Let $s$, $d$ and $N$ be positive integers. Then \begin{align}\label{equ:homogeneousWols} \zeta_{\calA(N)}\Lrp{\{s\}^d}{\{1\}^d}=0. \end{align} \end{prop}

\begin{eg}\label{eg:level3} At level 3, by Proposition~\ref{prop:reversalFCVSCV} and Proposition~\ref{prop:homogeneousWols1}, for all $w\in \N$, we have \begin{alignat*}{3} \zeta_{\calA(3)}\lrp{w}{1}=&\, 0, \quad &\zeta_{\calA(3)}\lrp{w,w}{1,1}=&\, 0, \quad & \zeta_{\calA(3)}\lrp{w}{\xi_3}=&\, \xi_3^2 (-1)^w \zeta_{\calA(3)}\lrp{w}{\xi_3^2},\\ \zeta_\ast^\Sy\lrp{w}{1}=&\, \gd_w\zeta(w), \quad &\zeta_\ast^\Sy\lrp{w,w}{1,1}=&\, \gd_w\zeta(w)^2-\zeta(2w), \quad & \zeta_\ast^\Sy\lrp{w}{\xi_3}=&\, \xi_3^2(-1)^w \zeta_\ast^\Sy\lrp{w}{\xi_3^2}, \end{alignat*} where $\gd_w= 1+(-1)^w$, and \begin{alignat*}{3} \zeta_{\calA(3)}\lrp{w,w}{1,\xi_3}=&\, \xi_3^2 \zeta_{\calA(3)}\lrp{w,w}{\xi_3^2,1}, \quad & \zeta_\ast^\Sy\lrp{w,w}{1,\xi_3}=&\, \xi_3^2 \zeta_\ast^\Sy\lrp{w,w}{\xi_3^2,1}, \\ \zeta_{\calA(3)}\lrp{w,w}{1,\xi_3^2}=&\,\xi_3\zeta_{\calA(3)}\lrp{w,w}{\xi_3,1},\quad & \zeta_\ast^\Sy\lrp{w,w}{1,\xi_3^2}=&\,\xi_3 \zeta_\ast^\Sy\lrp{w,w}{\xi_3,1}, \\ \zeta_{\calA(3)}\lrp{w,w}{\xi_3,\xi_3}=&\,\xi_3\zeta_{\calA(3)}\lrp{w,w}{\xi_3^2,\xi_3^2},\quad & \zeta_\ast^\Sy\lrp{w,w}{\xi_3,\xi_3}=&\,\xi_3 \zeta_\ast^\Sy\lrp{w,w}{\xi_3^2,\xi_3^2}. \end{alignat*} \end{eg}

\begin{eg}\label{eg:level4} At level 4, by \cite[Cor.~2.3]{Tauraso10}, we have \begin{align} \zeta_{\calA(4)}\lrp{w}{1}=0, \quad \zeta_{\calA(4)}\lrp{w}{i^2}= \left\{ \begin{array}{ll} 0, & \hbox{if $w$ is even;} \\ -2q_2, & \hbox{if $w=1$;} \\ -2(1-2^{1-w}) \gb_w, & \hbox{otherwise,} \end{array} \right. \notag\\ \zeta_{\calA(4)}\lrp{w}{i}=\sum_{p>k>0} \frac{i^k}{k^w} =E^{(w)}_p+iO^{(w)}_p,\quad \zeta_{\calA(4)}\lrp{w}{i^3}=\sum_{p>k>0} \frac{i^{3k}}{k^w}=E^{(w)}_p-iO^{(w)}_p,\label{equ:depth1N=4} \end{align} where $q_2:=((2^{p-1}-1)/p)_{p\in\calP(4)}$ is the $\calA(4)$-Fermat quotient, $\gb_w:=(B_{p-w}/w)_{p\in\calP(4),p>w}$ is the $\calA(4)$-Bernoulli number, and \begin{align*} E^{(w)}_p:=\sum_{p>2k>0}\frac{(-1)^k}{(2k)^w},\qquad O^{(w)}_p:=\sum_{p>2k+1>0}\frac{(-1)^k}{(2k+1)^w}. \end{align*} To find more relations, first we observe that \begin{align*} \sum_{k=(p-1)/2}^{p-1} \frac{(-1)^k}{k} \equiv \sum_{p-k=(p-1)/2}^{p-1} \frac{(-1)^{p-k}}{p-k} \equiv \sum_{p>2k>0}\frac{(-1)^k}{k}\equiv \frac12 E^{(1)}_p \pmod{p}. \end{align*} By Eq.~\eqref{equ:depth1N=4}, we have \begin{align*} \zeta_{\calA(4)}\lrp{1}{i^2}=2 \zeta_{\calA(4)}\lrp{1}{i}+2 \zeta_{\calA(4)}\lrp{1}{i^3}. \end{align*} This is consistent with Conjecture~\ref{conj:Main} since \begin{align}\label{equ:piInSCV14} \zeta_\ast^\Sy\lrp{1}{i^2}= 2\zeta_\ast^\Sy\lrp{1}{i}+2\zeta_\ast^\Sy\lrp{1}{i^3}-\pi \equiv 2\zeta_\ast^\Sy\lrp{1}{i}+2\zeta_\ast^\Sy\lrp{1}{i^3} \pmod{2\pi i\, \Q(i)}. \end{align} Indeed, we have \begin{align*} \zeta_\ast^\Sy\lrp{1}{i^2}=&\, \Li_1(i^2)-\ol{i^2} \Li_1(\ol{i^2})=2\Li_1(-1)=-2\log 2,\\ \zeta_\ast^\Sy\lrp{1}{i}=&\, \Li_1(i)+i \Li_1(i^3)=-\log(1-i)-i\log(1+i) =-\frac12\log 2+\frac{\pi i}{4}-\frac{i}{2}\log 2+\frac{\pi}{4},\\ \zeta_\ast^\Sy\lrp{1}{i^3}=&\, \Li_1(i^3)-i \Li_1(i)=-\log(1+i)+i\log(1-i) =-\frac12\log 2-\frac{\pi i}{4}+\frac{i}{2}\log 2+\frac{\pi}{4}. \end{align*} \end{eg}

{}From numerical evidence, we form the following conjecture: \begin{conj}\label{conj:level4FCVbasis} Let $w\ge 1$. For level $N=3,4$, the $\Q(\xi_N)$-vector space $\FCV_{w,N}$ has the following basis: \begin{align}\label{equ:BasisN=3,4} \left\{ \zeta_{\calA(N)}\Lrp{\{1\}^w}{\xi_N,\xi_N^{\gd_2},\dots,\xi_N^{\gd_w}}\colon \gd_2,\dots,\gd_{w}\in \{0,1\} \right\}. \end{align} \end{conj}

To find as many $\Q(\gG_N)$-linear relations as possible in weight $w$, we may take all the known relations in weight $k<w$, multiply them by $\displaystyle \zeta_{\calA(N)}\lrp{\bfs}{\bfeta}$ for all $\bfs$ of weight $w-k$ and all $\bfeta$, and then expand all the products using the stuffle relations proved in Theorem \ref{thm:stuffleFCV}. All the $\Q(\gG_N)$-linear relations among FCVs of the same weight produced in this way are called \emph{linear stuffle relations of FCVs}. We can similarly define linear stuffle relations of SCVs. By using linear shuffle and stuffle relations and the reversal relations, we can show that the sets in Eq.~\eqref{equ:BasisN=3,4} of Conjecture~\ref{conj:level4FCVbasis} are generating sets in the cases $1\le w\le 4$ and $N=3,4$, but we cannot show their linear independence at the moment.

Concerning Conjecture~\ref{conj:Main} (i), the inclusion $\nSCV_{w,N}\subseteq \CMZV_{w,N}$ is trivial, but the opposite inclusion seems difficult. Note that since $2\pi i\in \SCV_{1,4}$ by Eq.~\eqref{equ:piInSCV14}, we have $\nSCV_{w,4}=\SCV_{w,4}$ for all $w$ by the stuffle relations. But from Example~\ref{eg:level3} we see that $\SCV_{1,3}$ is generated by \begin{align*} \zeta_\ast^\Sy\lrp{1}{\xi_3}=\zeta\lrp{1}{\xi_3}-\xi_3^2\zeta\lrp{1}{\xi_3^2} \equiv (1-\xi_3^2)\zeta\lrp{1}{\xi_3} \pmod{2\pi i\, \Q(\xi_3)}, \end{align*} which should imply that $\SCV_{1,3}=\Big\langle \zeta\lrp{1}{\xi_3} \Big\rangle \ne \nSCV_{1,3}$, since conjecturally $\zeta\lrp{1}{\xi_3}=(\pi i-3\log 3)/6$ and $\pi$ are algebraically independent. Moreover, for weight $w\le 3$ and level $N=3,4$ we have numerically verified that in both spaces of Conjecture~\ref{conj:Main} (ii) exactly the same linear relations hold, leading to the dimension upper bound $2^{w-1}$ (with error bounded by $10^{-99}$ for SCVs, and with the congruences checked for all primes $p<312$ and $p=1019$ in $\calP(N)$ for FCVs).
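The depth-one identities above are easy to spot-check by machine. The following minimal sketch (ours, in Python rather than the Maple code used for the computations reported here; the function names are ours) verifies the depth-one reversal relation of Example~\ref{eg:level3}, $\zeta_{\calA(3)}\lrp{w}{\xi_3}= \xi_3^2 (-1)^w \zeta_{\calA(3)}\lrp{w}{\xi_3^2}$, at a single prime $p\equiv -1 \pmod 3$, representing $\F_p[\xi_3]=\F_p[X]/(X^3-1)$ by coefficient vectors:

\begin{verbatim}
def fcv_depth1(w, e, p, N=3):
    # sum over 0<k<p of xi^(e*k)/k^w in F_p[X]/(X^N - 1), as coefficient vector
    coeff = [0] * N
    for k in range(1, p):
        j = (e * k) % N
        coeff[j] = (coeff[j] + pow(k, -w, p)) % p
    return coeff

def times_xi(s, c, N=3):
    # multiply an element with coefficient vector c by xi^s
    return [c[(j - s) % N] for j in range(N)]

p, w = 101, 3                       # p = 101 satisfies p = -1 (mod 3)
lhs = fcv_depth1(w, 1, p)           # zeta_{A(3)}(w; xi_3) at the p-component
rhs = times_xi(2, [(-1) ** w * c % p for c in fcv_depth1(w, 2, p)])
print(lhs == rhs)                   # expected output: True
\end{verbatim}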
\begin{eg}\label{eg:higherWtlevel3,4}
In weight 2 and levels 3 and 4, we can prove rigorously that
\begin{align*}
3\zeta_{\calA(3)}\lrp{2}{\xi_3}=&\, 2\zeta_{\calA(3)}\lrp{1,1}{\xi_3,\xi_3}(1-\xi_3) -6\zeta_{\calA(3)}\lrp{1,1}{\xi_3,1},\\
3\zeta_\ast^\Sy\lrp{2}{\xi_3}=&\, 2\zeta_\ast^\Sy\lrp{1,1}{\xi_3,\xi_3}(1-\xi_3) -6\zeta_\ast^\Sy\lrp{1,1}{\xi_3,1}+\frac{(2\pi i)^2}{12}(2+\xi_3),\\
\zeta_{\calA(4)}\lrp{2}{i}=&\, (i-1)\zeta_{\calA(4)}\lrp{1,1}{i,1}-i\zeta_{\calA(4)}\lrp{1,1}{i,i}, \\
\zeta_\ast^\Sy\lrp{2}{i}=&\, (i-1)\zeta_\ast^\Sy\lrp{1,1}{i,1}-i\zeta_\ast^\Sy\lrp{1,1}{i,i} + \frac{2\pi i}{12} \Big( 2(1+i)\zeta_\ast^\Sy\lrp{1}{i}-i\zeta_\ast^\Sy\lrp{1}{-1} \Big).
\end{align*}
In weight 3 and levels $N=3$ and 4, we have verified numerically that
\begin{align*}
\zeta_{\calA(3)}\lrp{1,1,1}{1,\xi_3^2,\xi_3}=&\,3\xi_3\zeta_{\calA(3)}\lrp{1,1,1}{\xi_3,1,1},\\
\zeta_\ast^\Sy\lrp{1,1,1}{1,\xi_3^2,\xi_3}=&\,3\xi_3\zeta_\ast^\Sy\lrp{1,1,1}{\xi_3,1,1}+ \frac{(2\pi i)^2}{24}(1-\xi_3)\zeta_\ast^\Sy\lrp{1}{\xi_3}-\frac{(2\pi i)^3}{144}(1-\xi_3),
\end{align*}
and
\begin{align*}
(15-66i)\zeta_{\calA(4)}\lrp{1,2}{1,1}=&\, 48\Big( (1+i)\zeta_{\calA(4)}\lrp{1,1,1}{i,i,1} -(1+2i)\zeta_{\calA(4)}\lrp{1,1,1}{i,i,i} \\
&\,+2(1-i)\zeta_{\calA(4)}\lrp{1,1,1}{i,1,i} -3(1-i)\zeta_{\calA(4)}\lrp{1,1,1}{i,1,1} \Big),
\end{align*}
while
\begin{multline*}
(15-66i)\zeta_\ast^\Sy\lrp{1,2}{1,1}=48\Big( (1+i)\zeta_\ast^\Sy\lrp{1,1,1}{i,i,1} -(1+2i)\zeta_\ast^\Sy\lrp{1,1,1}{i,i,i}\\
+2(1-i)\zeta_\ast^\Sy\lrp{1,1,1}{i,1,i} -3(1-i)\zeta_\ast^\Sy\lrp{1,1,1}{i,1,1}\Big)\\
+2\pi i \Big[ (106+2i)\zeta_\ast^\Sy\lrp{1,1}{i,i} -(2-88i) \zeta_\ast^\Sy\lrp{1,1}{i,1} -\Big(\frac{3}{2}-11i\Big)\zeta_\ast^\Sy\lrp{1,1}{-1,-1} -(64+26i)\zeta_\ast^\Sy\lrp{1,1}{i,-1} \Big].
\end{multline*}
\end{eg}

We end this paper with an intriguing mystery. During our Maple computations, we found that FCVs should have an interesting structure over $\Q$. For example, numerical evidence suggests that the weight $w$, level 4 FCVs should generate a vector space of dimension $2^w$ over $\Q$. We wonder how to relate this $\Q$-structure to that of the CMZVs at level 4.

\bibliographystyle{alpha}
\bibliography{library}
\end{document}
\begin{document} \title[On orthogonal polynomials described by Chebyshev polynomials]{A note on orthogonal polynomials described by Chebyshev polynomials} \author{K. Castillo} \address{CMUC, Department of Mathematics, University of Coimbra, 3001-501 Coimbra, Portugal} \email{[email protected]} \author{M. N. de Jesus} \address{CI$\&$DETS/IPV, Polytechnic Institute of Viseu, ESTGV, Campus Polit\'ecnico de Repeses, 3504-510 Viseu, Portugal} \email{[email protected]} \author{J. Petronilho} \address{CMUC, Department of Mathematics, University of Coimbra, 3001-501 Coimbra, Portugal} \email{[email protected]} \subjclass[2010]{42C05, 33C45} \date{\today} \keywords{Orthogonal polynomials, Chebyshev polynomials, polynomial mappings, positive measures, semiclassical orthogonal polynomials} \begin{abstract} The purpose of this note is to extend in a simple and unified way some results on orthogonal polynomials with respect to the weight function $$\frac{|T_m(x)|^p}{\sqrt{1-x^2}}\;,\quad-1<x<1\;,$$ where $T_m$ is the Chebyshev polynomial of the first kind of degree $m$ and $p>-1$. \end{abstract} \maketitle
\section{Main result} Let $T_n$ and $U_n$ denote the Chebyshev polynomials of first and second kind, that is, $T_n(\cos\theta)=\cos(n\theta)$ and $U_n(\cos\theta)=\sin\big((n+1)\theta\big)/\sin\theta$ for each nonnegative integer $n$ and $0<\theta<\pi$. Let $\widehat{T}_n$ and $\widehat{U}_n$ denote the corresponding monic polynomials, so that $\widehat{T}_0:=\widehat{U}_0:=1$, $\widehat{T}_n(x):=2^{1-n}T_n(x)$ and $\widehat{U}_n(x):=2^{-n}U_n(x)$ for each positive integer $n$. Set $T_{-n}:=U_{-n}:=\widehat{T}_{-n}:=\widehat{U}_{-n}:=0$ for each positive integer $n$. The reader is assumed familiar with basic properties of Chebyshev polynomials. We prove the following \begin{proposition}\label{main} Fix an integer $m\geq2$. Define $t_0:=0$ and let $(t_n)_{n\geq1}$ be a sequence of nonzero complex numbers such that \begin{align*} & t_{2mn+j}=\mbox{$\frac14$}\;,\quad j\in\{0,1,\dots,2m-1\}\setminus\{0,1,m,m+1\}\;, \\ & t_{2mn}+t_{2mn+1}=t_{2mn+m}+t_{2mn+m+1}=\mbox{$\frac12$}\;, \end{align*} for each nonnegative integer $n$. Let $(P_n)_{n\geq0}$ be the sequence of monic orthogonal polynomials given by \begin{align}\label{pn} P_{n+1}(x)=xP_n(x)-t_nP_{n-1}(x)\;. \end{align} Then $P_j(x)=\widehat{T}_{j}(x)$ for each $j\in\{0,1,\dots,m\}$ and \begin{equation}\label{p2mn+m+j+1} P_{2mn+m+j+1}(x) =\frac{A_j(n;x)Q_{n+1}\big(\widehat{T}_{2m}(x)\big) +4^{-j}t_{2mn+m+1}B_j(n;x)Q_{n}\big(\widehat{T}_{2m}(x)\big)}{\widehat{U}_{m-1}(x)} \end{equation} for each $j\in\{0,1,\dots,2m-1\}$, where \begin{align*} A_j(n;x)&:=\widehat{U}_j(x)+\Big(\mbox{$\frac14$}-t_{2m(n+1)}\Big) \Big(\widehat{U}_{j-m}(x)\widehat{U}_{m-2}(x)-\widehat{U}_{j-m-1}(x)\widehat{U}_{m-1}(x)\Big)\,, \\ B_j(n;x)&:=\widehat{U}_{2m-2-j}(x) \\ &\quad+\Big(\mbox{$\frac14$}-t_{2m(n+1)}\Big) \Big(\widehat{U}_{m-j-3}(x)\widehat{U}_{m-1}(x)-\widehat{U}_{m-j-2}(x)\widehat{U}_{m-2}(x)\Big)\,, \end{align*} and $(Q_n)_{n\geq0}$ is the sequence of monic orthogonal polynomials given by \begin{align*} Q_{n+1}(x)=(x-r_n)Q_n(x)-s_nQ_{n-1}(x)\;, \end{align*} with \begin{align*} r_n&:=\frac{1}{\displaystyle 2^{2m-4}}\Big(t_{2mn+m}t_{2mn+1}+t_{2m(n+1)}t_{2mn+m+1}-\mbox{$\frac18$}\Big)\;,\\ s_n&:=\frac{1}{\displaystyle 4^{2m-4}}\,t_{2mn}t_{2mn+1}t_{2mn+m}t_{2m(n-1)+m+1}\;. \end{align*} Assume furthermore that $t_n>0$ for each positive integer $n$. Then $(P_n)_{n\geq0}$ and $(Q_n)_{n\geq0}$ are orthogonal polynomial sequences with respect to certain positive measures, say $\mu_P$ and $\mu_Q$ respectively. Suppose that $\mu_Q$ is absolutely continuous with weight function $w_Q$ on $[\xi,\eta]$, with $-2^{1-2m}\leq\xi<\eta\leq2^{1-2m}$, i.e., $$ {\rm d}\mu_Q(x)=w_Q(x)\chi_{(\xi,\eta)}(x){\rm d}x\;. $$ Suppose in addition that $w_Q$ satisfies the condition $$C:=\int_\xi^\eta\frac{w_Q(x)}{x+2^{1-2m}}\,{\rm d}x<\infty\;.$$ Then $($up to a positive constant factor$)$ \begin{align}\label{muP} {\rm d}\mu_P(x)=w_P(x)\chi_{E}(x){\rm d}x + M\sum_{j=1}^m\delta\left(x-\cos\frac{(2j-1)\pi}{2m}\right)\;, \end{align} where $$ w_P(x):=\left|\frac{U_{m-1}(x)}{T_m(x)}\right|w_Q\big(\widehat{T}_{2m}(x)\big)\;,\quad x\in E:=\widehat{T}_{2m}^{-1}\big((\xi,\eta)\big)\,, $$ and $M$ is a nonnegative number given by $$ M:=\frac{1}{m}\left(\frac{2^{2m-3}\mu_Q(\mathbb{R})}{t_m}-C\right)\;. $$ \end{proposition} \begin{remark} For $j=2m-1$, \eqref{p2mn+m+j+1} reduces to $$ P_{2mn+m}(x)=\widehat{T}_m(x)\,Q_n\big(\widehat{T}_{2m}(x)\big)\;. $$ \end{remark} \begin{remark}\label{Zmzi} In general, the set $E$ is the union of $2m$ disjoint open intervals separated by the zeros of $U_{2m-1}$. 
$($If all these intervals have the zeros of $U_{2m-1}$ as boundary points, then $\overline{E}$ reduces to a single interval.$)$ Moreover, $\mbox{\rm supp}\big(\mu_P\big)=\overline{E}$ if $M=0$, and $\mbox{\rm supp}\big(\mu_P\big)=\overline{E}\cup Z_m$ if $M>0$, where $Z_m$ is the set of zeros of $T_m$, i.e., $Z_m:=\big\{z_i:=\cos\big((2i-1)\pi/(2m)\big)\,|\, 1\leq i\leq m\big\}$.
\end{remark}

\begin{remark}
Proposition \ref{main} does not cover the case $m=1$. Indeed, if $m=1$ the conditions imposed on the sequence $(t_n)_{n\geq1}$ imply $t_2=0$. In any case, after reading Section 2, interested readers will be able to derive the corresponding result for $m=1$.
\end{remark}

In order to illustrate the practical effectiveness of Proposition \ref{main}, we consider a simple example. Fix $p\in\mathbb{C}\setminus\{-1,-2,\ldots\}$ and an integer $m\geq2$. Define $t_{2mn+j}:=1/4$ for each $j\in\{0,1,\ldots,2m-1\}\setminus\{0,1,m,m+1\}$ and
\begin{align*}
\qquad t_{2mn}&:=\frac{n}{4n+p}\;, & t_{2mn+1}&:=\frac{2n+p}{2(4n+p)}\;, & \quad \\
\qquad t_{2mn+m}&:=\frac{2n+p+1}{2(4n+p+2)}\;, & t_{2mn+m+1}&:=\frac{2n+1}{2(4n+p+2)} &
\end{align*}
for each nonnegative integer $n$. (If $p=0$ it is understood that $t_1:=1/2$.) These parameters satisfy the hypotheses of Proposition \ref{main}. In this case
\begin{align*}
r_n&=\frac{1}{\displaystyle 2^{2m-1}} \frac{p(p+2)}{(4n+p)(4n+p+4)}\;, \\
s_n&=\frac{1}{\displaystyle 4^{2m-1}}\frac{2n(2n-1)(2n+p)(2n+p+1)} {(4n+p-2)(4n+p)^2(4n+p+2)}\;,
\end{align*}
and so
$$Q_n(x)=2^{(1-2m)n}\widehat{P}_n^{\big(-\frac12,\frac{p+1}{2}\big)}\big(2^{2m-1}x\big)\;,$$
where $\widehat{P}_n^{(\alpha,\beta)}$ denotes the monic Jacobi polynomial of degree $n$ (see \cite[Chapter 4]{I2005} and \cite[(6.11)]{M1991}). Hence, by Proposition \ref{main},
\begin{align}\label{ExamplePn}
&P_{2mn+m+j+1}(x) \\
&\quad=\frac{(2n+2+p)U_j(x)-pT_m(x)U_{j-m}(x)}{2^{(2m-1)n+m+j-1}(4n+p+4)U_{m-1}(x)} \widehat{P}_{n+1}^{\big(-\frac12,\frac{p+1}{2}\big)}\big(T_{2m}(x)\big) \nonumber \\
&\quad\quad +\frac{(2n+1)\big((2n+2)U_{2m-2-j}(x)-pU_{m-1}(x)T_{m-j-1}(x)\big)} {2^{(2m-1)n+m+j}(4n+p+2)(4n+p+4)U_{m-1}(x)} \widehat{P}_{n}^{\big(-\frac12,\frac{p+1}{2}\big)}\big(T_{2m}(x)\big) \nonumber
\end{align}
for each nonnegative integer $n$ and each $j\in\{0,1,\dots,2m-1\}$. Furthermore, if $p>-1$ then $(Q_n)_{n\geq0}$ is a sequence of orthogonal polynomials associated with the weight function $w_Q$ on $\big[-2^{1-2m},2^{1-2m}\big]$ given by
$$ w_Q(x)=2^{-p/2}\big(1-2^{2m-1}x\big)^{-\frac12}\big(1+2^{2m-1}x\big)^{\frac{p+1}{2}}\;,\quad -2^{1-2m}<x<2^{1-2m}\;. $$
Since
\begin{align*}
\mu_Q(\mathbb{R})&=\int_{-2^{1-2m}}^{2^{1-2m}}w_Q(x)\,{\rm d}x =2^{2-2m}\,\frac{p+1}{p+2}B\Big(\mbox{$\frac{p+1}{2}$},\mbox{$\frac{1}{2}$}\Big)\;,\\
C&=\int_{-2^{1-2m}}^{2^{1-2m}}\frac{w_Q(x)}{x+2^{1-2m}}\,{\rm d}x =B\Big(\mbox{$\frac{p+1}{2}$},\mbox{$\frac{1}{2}$}\Big)\;,
\end{align*}
$B$ being the Beta function (see \cite[p. 8]{I2005}), we get $M=0$. Taking into account that $\overline{E}=\widehat{T}_{2m}^{-1}\big(\big[-2^{1-2m},2^{1-2m}\big]\big)={T}_{2m}^{-1}\big([-1,1]\big)=[-1,1]$, we conclude that $(P_n)_{n\geq0}$ is a sequence of orthogonal polynomials associated with the weight function $w_P$ on $[-1,1]$ given by
\begin{align}
w_P(x)&=2^{-p/2}\,\left|\frac{U_{m-1}(x)}{T_m(x)}\right| \big(1-T_{2m}(x)\big)^{-\frac12}\big(1+T_{2m}(x)\big)^{\frac{p+1}{2}} \nonumber \\
&=\frac{\;|T_m(x)|^p}{\sqrt{1-x^2}}\;,\quad-1<x<1\;.
\label{measureTmp}
\end{align}

\begin{remark}
The search for the recurrence coefficients associated with the weight function \eqref{measureTmp} has aroused interest over the years. For $p=2$ see \cite{MVA1989}; for $p=1$ and $m=2$ see \cite{GLi1988}; for $p=2s$ with $s\in\mathbb{N}$ and $m=2$ see \cite{CMM2016}; and for $p=2s$ with $s>-1/2$ and $m=2$ see \cite{CMV2018}. It is worth mentioning that the explicit representation \eqref{ExamplePn} was not established in these works, although a representation based on a different polynomial mapping appears in \cite{MVA1989} for $p=2$, using results from \cite{GVA1988}.
\end{remark}

The sequence $(P_n)_{n\geq0}$ in Proposition \ref{main} may be regarded as an example of sieved orthogonal polynomials $($see e.g.\ \cite{AAA1984,CI1986,CI1993,CIM1994}$)$. We could in fact go further and show more distinctly how some recent developments of the theory of polynomial mappings (see \cite{MP2010,KMP2017,KMP2019}) apply to this kind of problem, but that would demand a more extended discussion which we do not pursue here. Indeed, using the general results stated in \cite{KMP2017}, it can be shown that if $(P_n)_{n\geq0}$ or $(Q_n)_{n\geq0}$ in Proposition \ref{main} is semiclassical (see \cite{M1991}), then so is the other one. In this case, we may also obtain the functional (distributional) equation fulfilled by the (moment) regular functional associated with $(P_n)_{n\geq0}$ or $(Q_n)_{n\geq0}$. In particular, as in \cite{KMP2019} or directly from \eqref{ExamplePn}, we may easily derive the linear homogeneous second order ordinary differential equation that the orthogonal polynomials with respect to \eqref{measureTmp} fulfil, which in turn leads to interesting electrostatic models (see \cite[Section 6]{KMP2019} and \cite[Section 3.5]{I2005}).
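Before turning to the proof, let us record a quick numerical sanity check of the example above. The following Python sketch (ours, for illustration only) builds the coefficients $t_n$ for the sample choice $m=2$, $p=2$, generates the monic polynomials $P_n$ through \eqref{pn}, and tests their orthogonality with respect to the weight \eqref{measureTmp} by Gauss--Chebyshev quadrature; since $p$ is an even integer, the integrand is a polynomial against the Chebyshev weight and the quadrature is exact up to rounding.
\begin{verbatim}
import numpy as np

m, p = 2, 2  # sample parameters; p even keeps the integrand polynomial

def t(n):
    # recurrence coefficients t_n of the example, with n = 2m*q + j
    q, j = divmod(n, 2 * m)
    if j == 0:
        return q / (4 * q + p) if n > 0 else 0.0  # t_0 := 0
    if j == 1:
        return (2 * q + p) / (2 * (4 * q + p))
    if j == m:
        return (2 * q + p + 1) / (2 * (4 * q + p + 2))
    if j == m + 1:
        return (2 * q + 1) / (2 * (4 * q + p + 2))
    return 0.25

def P(n, x):
    # monic orthogonal polynomials via P_{k+1} = x P_k - t_k P_{k-1}
    p_prev, p_cur = np.zeros_like(x), np.ones_like(x)
    for k in range(n):
        p_prev, p_cur = p_cur, x * p_cur - t(k) * p_prev
    return p_cur

# N-point Gauss-Chebyshev rule: exact for deg f < 2N against 1/sqrt(1-x^2)
N = 200
x = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))
w = np.abs(np.cos(m * np.arccos(x))) ** p  # |T_m(x)|^p

for j in range(6):
    for k in range(j):
        inner = np.pi / N * np.sum(P(j, x) * P(k, x) * w)
        assert abs(inner) < 1e-8, (j, k, inner)
print("P_0, ..., P_5 orthogonal w.r.t. |T_m(x)|^p / sqrt(1 - x^2)")
\end{verbatim}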
\section{Proof of Proposition \ref{main}}

Set $k:=2m$ and rewrite \eqref{pn} as a system of blocks of recurrence relations
\begin{align*}
x P_{nk+j}(x)=P_{nk+j+1}(x)+a_n^{(j)}P_{nk+j-1}(x) \;,\quad 0\leq j\leq k-1\;,
\end{align*}
where $P_{-1}:=0$ and $a_n^{(j)}:=t_{kn+j}$ whenever $(n,j)\neq(0,0)$. Following \cite{CI1993,CIM1994}, we introduce the notation
\begin{equation}\nonumber
\Delta_n(i,j;x):=\left\{ \begin{array}{rl} 0\,,&j<i-2 \\ 1\,,& j=i-2 \\ x\,,& j=i-1 \end{array} \right.
\end{equation}
and
\begin{equation}\nonumber
\Delta_n(i,j;x):=\det\left( \begin{array}{cccccc} x & 1 & 0 & \dots & 0 & 0 \\ a_n^{(i)} & x & 1 & \dots & 0 & 0 \\ 0 & a_n^{(i+1)} & x & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & x & 1 \\ 0 & 0 & 0 & \ldots & a_n^{(j)} & x \end{array} \right)\;,\quad j\geq i\geq 1\;.
\end{equation}
The reader may verify that
$$ \Delta_n(m+2,m+j;x)=A_j(n;x)\,, \quad \Delta_{n}(m+j+3,m+k-1;x)=B_j(n;x) $$
for each $j\in\{0,1,\ldots,2m-1\}$. In particular,
\begin{align*}
\Delta_n(m+2,m+k-1;x)&=\widehat{U}_{2m-1}(x)=\widehat{T}_m(x)\widehat{U}_{m-1}(x) \\
&=\Delta_0(1,m-1;x)\eta_{k-1-m}(x)\,,\quad \eta_{k-1-m}(x):=\widehat{U}_{m-1}(x)\,.
\end{align*}
Moreover,
\begin{align*}
r_n(x)&:=a_{n}^{(m+1)}\Delta_{n}(m+3,m+k-1;x)-a_0^{(m+1)}\Delta_{0}(m+3,m+k-1;x) \\
&\quad+a_{n}^{(m)}\Delta_{n-1}(m+2,m+k-2;x)-a_0^{(m)}\Delta_{0}(1,m-2;x)\,\eta_{k-1-m}(x) \\
&=\frac{1}{\displaystyle 2^{2m-4}}\Big(t_{2mn+m}t_{2mn+1}+t_{2m(n+1)}t_{2mn+m+1} -\mbox{$\frac12$}\,t_m-t_{m+1}t_{2m}\Big)
\end{align*}
for each positive integer $n$, and so
$$ r+r_n(0)=r_n\;,\quad r_0=r:=\frac{1}{\displaystyle 2^{2m-4}}\Big(\mbox{$\frac18$}-t_{m+1}t_{2m+1}\Big)\;. $$
Note also that $a_{n}^{(m)}a_{n-1}^{(m+1)}\cdots a_{n-1}^{(m+k-1)}=s_n$. In addition,
\begin{align*}
\pi_k(x)&:=\Delta_0(1,m;x)\,\eta_{k-1-m}(x)-a_0^{(m+1)}\,\Delta_0(m+3,m+k-1;x)+r\\
&=\widehat{T}_{2m}(x)\;.
\end{align*}
Consequently, the hypotheses of \cite[Theorem 2.1]{MP2010} are satisfied with $\theta_m(x)=\widehat{T}_{m}(x)$, and so (\ref{p2mn+m+j+1}) follows. Finally, the explicit representation of $\mu_P$ appearing in (\ref{muP}) follows by \cite[Theorem 3.4\footnote{In \cite[Theorem 3.4]{MP2010}, $r=0$. Nevertheless, inspection of its proof shows that the theorem remains true if, instead of $r=0$, we assume that the polynomials $\pi_k$ and $\theta_m\eta_{k-m-1}$ have real and distinct zeros.} and Remark 3.5]{MP2010}, after noting that
\begin{align*}
\pi_k'& =2m\widehat{U}_{2m-1}=2m\widehat{T}_{m}\widehat{U}_{m-1}=2m\theta_m\eta_{k-m-1}\;, \\
\pi_k(z_i)& =2^{1-2m}\,T_2\big(T_m(z_i)\big)=2^{1-2m}\,T_2(0)=-2^{1-2m}\;, \\
M_i & :=\frac{ \mu_Q(\mathbb{R})\,\Delta_0(2,m-1;z_i)/ \prod_{j=1}^m a_0^{(j)}-\eta_{k-1-m}(z_i)\,C}{ \theta_m^\prime(z_i)}\\
&=\frac{ \mu_Q(\mathbb{R})\,\widehat{U}_{m-1}(z_i)\cdot 2^{2m-3}t_m^{-1} -\widehat{U}_{m-1}(z_i)\,C}{ m\widehat{U}_{m-1}(z_i)}=M
\end{align*}
for each $i\in\{1,\ldots,m\}$, where $z_i$ is a zero of $T_m$ (see Remark \ref{Zmzi}).
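As an elementary consistency check on the computations above, note that expanding the determinant along its last row yields the continuant recurrence $\Delta_n(i,j;x)=x\,\Delta_n(i,j-1;x)-a_n^{(j)}\,\Delta_n(i,j-2;x)$. The following sympy sketch (ours, for illustration only) implements this recurrence and verifies, first, that constant entries $1/4$ reproduce the monic Chebyshev polynomials $\widehat{U}_n$ and, second, the instance $m=2$, $n=0$ of the identity $\Delta_n(m+2,m+k-1;x)=\widehat{U}_{2m-1}(x)$, which here follows from the single constraint $t_4+t_5=\frac12$.
\begin{verbatim}
import sympy as sp

x, t4 = sp.symbols('x t4')

def Delta(entries):
    # continuant recurrence D_j = x*D_{j-1} - a_j*D_{j-2},
    # seeded with D_{i-2} = 1 and D_{i-1} = x
    d_prev, d = sp.Integer(1), x
    for a in entries:
        d_prev, d = d, sp.expand(x * d - a * d_prev)
    return d

# constant entries 1/4 give the monic Chebyshev polynomials U_hat
for n in range(1, 8):
    U_hat = sp.expand(sp.chebyshevu(n + 1, x) / 2 ** (n + 1))
    assert sp.expand(Delta([sp.Rational(1, 4)] * n) - U_hat) == 0

# m = 2, n = 0: Delta_0(4, 5; x) = x^3 - (t_4 + t_5) x = U_hat_3
t5 = sp.Rational(1, 2) - t4  # constraint t_4 + t_5 = 1/2
assert sp.expand(Delta([t4, t5]) - (x**3 - x / 2)) == 0
print("continuant identities verified")
\end{verbatim}
\end{document}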
\begin{document}

\FirstPageHeading
\ShortArticleName{On the Equivalence of Module Categories over a Group-Theoretical Fusion Category}
\ArticleName{On the Equivalence of Module Categories\\ over a Group-Theoretical Fusion Category}
\Author{Sonia NATALE}
\AuthorNameForHeading{S.~Natale}
\Address{Facultad de Matem\'atica, Astronom\'{\i}a, F\'{\i}sica y Computaci\'on, Universidad Nacional de C\'ordoba,\\ CIEM-CONICET, C\'ordoba, Argentina}
\Email{\href{mailto:[email protected]}{[email protected]}}
\URLaddress{\url{http://www.famaf.unc.edu.ar/~natale/}}
\ArticleDates{Received April 28, 2017, in f\/inal form June 14, 2017; Published online June 17, 2017}
\Abstract{We give a necessary and suf\/f\/icient condition in terms of group cohomology for two indecomposable module categories over a group-theoretical fusion category ${\mathcal C}$ to be equivalent. This concludes the classif\/ication of such module categories.}
\Keywords{fusion category; module category; group-theoretical fusion category}
\Classification{18D10; 16T05}

\section{Introduction}

Throughout this paper we shall work over an algebraically closed f\/ield $k$ of characteristic zero. Let $\mathcal{C}$ be a fusion category over~$k$. The notion of a ${\mathcal C}$-module category provides a natural categorif\/ication of the notion of representation of a group. The problem of classifying module categories plays a fundamental role in the theory of tensor categories. Two fusion categories ${\mathcal C}$ and ${\mathcal D}$ are called \emph{categorically Morita equivalent} if there exists an indecomposable $\mathcal{C}$-module category $\mathcal{M}$ such that $\mathcal{D}^{\rm op}$ is equivalent as a fusion category to the category $\operatorname{Fun}_{\mathcal C}({\mathcal M}, {\mathcal M})$ of ${\mathcal C}$-module endofunctors of~${\mathcal M}$. This def\/ines an equivalence relation in the class of all fusion categories.

Recall that a fusion category ${\mathcal C}$ is called \emph{pointed} if every simple object of ${\mathcal C}$ is invertible. A~basic class of fusion categories consists of those which are categorically Morita equivalent to a pointed fusion category; a fusion category in this class is called \emph{group-theoretical}. Group-theoretical fusion categories can be described in terms of f\/inite groups and their cohomology.

The purpose of this note is to give a necessary and suf\/f\/icient condition in terms of group cohomology for two indecomposable module categories over a group-theoretical fusion category to be equivalent. For this, it is enough to solve the same problem for indecomposable module categories over pointed fusion categories.

Let ${\mathcal C}$ be a pointed fusion category. Then there exist a f\/inite group $G$ and a~3-cocycle~$\omega$ on~$G$ such that ${\mathcal C} \cong {\mathcal C}(G, \omega)$, where ${\mathcal C}(G, \omega)$ is the category of f\/inite-dimensional $G$-graded vector spaces with associativity constraint def\/ined by $\omega$ (see Section~\ref{cgomega} for a precise def\/inition). Let~${\mathcal M}$ be an indecomposable right ${\mathcal C}$-module category.
Then there exists a~subgroup~$H$ of~$G$ and a~2-cochain $\psi \in C^2(H, k^{\times})$ satisfying
\begin{gather} \label{cond-alfa}d\psi = \omega\vert_{H \times H \times H},
\end{gather}
such that ${\mathcal M}$ is equivalent as a ${\mathcal C}$-module category to the category ${\mathcal M}_0(H, \psi)$ of left $A(H, \psi)$-modules in ${\mathcal C}$, where $A(H, \psi) = k_\psi H$ is the group algebra of $H$ with multiplication twisted by~$\psi$~\cite{ostrik}, \cite[Example~9.7.2]{egno}.

The main result of this paper is the following theorem.

\begin{Theorem}\label{main} Let $H, L$ be subgroups of $G$ and let $\psi \in C^2(H, k^{\times})$ and $\xi \in C^2(L, k^{\times})$ be $2$-cochains satisfying condition~\eqref{cond-alfa}. Then ${\mathcal M}_0(H, \psi)$ and ${\mathcal M}_0(L, \xi)$ are equivalent as ${\mathcal C}$-module categories if and only if there exists an element $g \in G$ such that $H = {}^gL$ and the class of the $2$-cocycle
\begin{gather}\label{cond-equiv} {\xi}^{-1}{\psi}^g \Omega_g\vert_{L\times L}
\end{gather}
is trivial in $H^2(L, k^{\times})$.
\end{Theorem}

Here we use the notation ${}^g x = gxg^{-1}$ and $^gL = \{{}^g x\colon x\in L\}$. The 2-cochain ${\psi}^g \in C^2(L, k^{\times})$ is def\/ined by ${\psi}^g(g_1, g_2) = {\psi}({}^gg_1, {}^gg_2)$, for all $g_1, g_2 \in L$, and $\Omega_g\colon G \times G \to k^{\times}$ is given by
\begin{gather*}\Omega_g(g_1, g_2) = \frac{\omega({}^gg_1, {}^gg_2, g) \omega(g, g_1, g_2)}{\omega({}^gg_1, g, g_2)}.
\end{gather*}

Observe that \cite[Theorem 3.1]{ostrik} states that the indecomposable module categories considered in Theorem~\ref{main} are parameterized by conjugacy classes of pairs $(H, \psi)$. However, this conjugation relation is not described in loc.\ cit.\ (compare also with \cite{nikshych} and \cite[Section~9.7]{egno}). \looseness=-1

Consider for instance the case where ${\mathcal C}$ is the category of f\/inite-dimensional representations of the 8-dimensional Kac--Paljutkin Hopf algebra. Then ${\mathcal C}$ is group-theoretical. In fact, ${\mathcal C} \cong {\mathcal C}(G, \omega, C, 1)$, where $G \cong D_8$ is a semidirect product of the group $L = \mathbb Z_2 \times \mathbb Z_2$ by $C = \mathbb Z_2$ and~$\omega$ is a certain (nontrivial) 3-cocycle on $G$~\cite{schauenburg}. Let $\xi$ represent a nontrivial cohomology class in $H^2(L, k^\times)$. According to the usual conjugation relation among pairs $(L, \psi)$, the result in \cite[Theorem~3.1]{ostrik} would imply that the pairs $(L, 1)$ and $(L, \xi)$, not being conjugate under the adjoint action of $G$, give rise to two inequivalent ${\mathcal C}$-module categories. These module categories both have rank one, whence they give rise to non-isomorphic f\/iber functors on ${\mathcal C}$. However, it follows from \cite[Theorem~4.8(1)]{ma-contemp} that the category ${\mathcal C}$ has a unique f\/iber functor up to isomorphism. In fact, in this example there exists $g \in G$ such that $\Omega_g\vert_{L\times L}$ is a 2-cocycle cohomologous to~$\xi$. See Example~\ref{kp}.

The condition given in Theorem~\ref{main} agrees with the usual conjugacy relation in the case where the 3-cocycle $\omega$ is trivial, and it reduces to the conjugation relation among subgroups when these happen to be cyclic.
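To make the cochain $\Omega_g$ concrete, it may help to test its basic property in the simplest setting. The following Python sketch (ours, purely illustrative) takes the standard generator $\omega(a,b,c)=\zeta_n^{a\lfloor (b+c)/n\rfloor}$ of $H^3(\mathbb Z_n, k^{\times})$ for $k=\mathbb C$, verif\/ies the 3-cocycle condition, and then checks that $\Omega_g$ is a 2-cocycle on $\mathbb Z_n$ for every $g$; since the group is abelian the conjugations ${}^g(\,)$ are trivial, and $\Omega_g$ takes the familiar form known from the theory of twisted doubles.
\begin{verbatim}
import itertools
import numpy as np

n = 4
zeta = np.exp(2j * np.pi / n)

def omega(a, b, c):
    # standard 3-cocycle on Z_n: omega(a,b,c) = zeta_n^(a*floor((b+c)/n))
    return zeta ** (a * ((b + c) // n))

# 3-cocycle condition
for a, b, c, d in itertools.product(range(n), repeat=4):
    lhs = omega(b, c, d) * omega(a, (b + c) % n, d) * omega(a, b, c)
    rhs = omega((a + b) % n, c, d) * omega(a, b, (c + d) % n)
    assert abs(lhs - rhs) < 1e-12

def Omega(g, g1, g2):
    # Omega_g as above; conjugation is trivial because Z_n is abelian
    return omega(g1, g2, g) * omega(g, g1, g2) / omega(g1, g, g2)

# Omega_g is a 2-cocycle on Z_n for every g
for g in range(n):
    for a, b, c in itertools.product(range(n), repeat=3):
        lhs = Omega(g, b, c) * Omega(g, a, (b + c) % n)
        rhs = Omega(g, (a + b) % n, c) * Omega(g, a, b)
        assert abs(lhs - rhs) < 1e-12
print("Omega_g is a 2-cocycle on Z_%d for every g" % n)
\end{verbatim}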
As explained in Section~\ref{adj-action}, condition~\eqref{cond-equiv} is equivalent to the condition that $A(L, \xi)$ and ${}^gA(H, \psi)$ be isomorphic as algebras in~${\mathcal C}$, where $\underline{G} \to \underline{\text{Aut}}_\otimes{\mathcal C}$, $g \mapsto {}^g( \ )$, is the adjoint action of~$G$ on~${\mathcal C}$ (see Lemma~\ref{gdea}). Theorem~\ref{main} can be reformulated as follows.

\begin{Theorem} Two ${\mathcal C}$-module categories ${\mathcal M}_0(H, \psi)$ and ${\mathcal M}_0(L, \xi)$ are equivalent if and only if the algebras $A(H, \psi)$ and $A(L, \xi)$ are conjugate under the adjoint action of $G$ on ${\mathcal C}$.
\end{Theorem}

Theorem~\ref{main} is proved in Section~\ref{demo}. Our proof relies on the fact that, as happens with group actions on vector spaces, the adjoint action of the group $G$ on the set of equivalence classes of ${\mathcal C}$-module categories is trivial (Lemma~\ref{adj-triv}). In the course of the proof we establish a relation between the 2-cocycle in~\eqref{cond-equiv} and a 2-cocycle attached to $g$, $\psi$ and $\xi$ in~\cite{ostrik} (Remark~\ref{rmk-alfag} and Lemma~\ref{rel-cociclos}).

We refer the reader to~\cite{egno} for the main notions on fusion categories and their module categories used throughout.
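In examples, deciding condition~\eqref{cond-equiv} amounts to testing whether an explicit 2-cocycle on $L$ is a coboundary, which for small groups can be settled by brute force. The following Python sketch (ours, purely illustrative) does this for $L=\mathbb Z_2 \times \mathbb Z_2$: if a $\{\pm1\}$-valued 2-cocycle $\beta$ equals $d\mu$, then $\mu^2$ is a character of $L$, hence $\{\pm1\}$-valued, so $\mu$ may be normalized to take values in the fourth roots of unity and a f\/inite search decides the class. The symmetric cocycle $(-1)^{a_1b_1}$ is a coboundary over $\mathbb C$, while $(-1)^{a_2b_1}$ represents the nontrivial class of $H^2(L, k^{\times})\cong \mathbb Z_2$, the class of the cocycle $\xi$ in the example discussed above.
\begin{verbatim}
import itertools

# L = Z_2 x Z_2, written additively
L = [(a, b) for a in range(2) for b in range(2)]
mul = lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def is_coboundary(beta):
    # search mu: L -> {1, i, -1, -i} with mu(e) = 1 and d(mu) = beta,
    # where d(mu)(u, v) = mu(u) mu(v) / mu(uv)
    roots = [1, 1j, -1, -1j]
    for vals in itertools.product(roots, repeat=3):
        mu = dict(zip(L[1:], vals))
        mu[L[0]] = 1
        if all(abs(mu[u] * mu[v] / mu[mul(u, v)] - beta(u, v)) < 1e-9
               for u in L for v in L):
            return True
    return False

sym = lambda u, v: (-1) ** (u[0] * v[0])  # trivial class over C
alt = lambda u, v: (-1) ** (u[1] * v[0])  # nontrivial class
print(is_coboundary(sym), is_coboundary(alt))  # True False
\end{verbatim}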